Work Item Reference: DTR/SAI-0015
ETSI Doc. Number: TR 104 066
STF:
Technical Body in Charge: SAI

Standard not ready for download.
Current Status: Final draft registered by ETSI Secretariat (2024-06-18)
Latest Version: 0.0.3 Draft
Cover Date:
Standstill:
Creation Date: 2024-01-30
Rapporteur: Scott Cadzow
Technical Officer: Kim Nordström
Harmonised Standard: No
Title: Securing Artificial Intelligence; Security Testing of AI
Scope and Field of Application:

The purpose of this work item is to identify methods and techniques appropriate for security testing of AI-based components, including showing that the requirements for explicability and transparency are met by the test objectives. Security testing of AI shares some commonalities with security testing of traditional systems, but it poses new challenges and requires different approaches, due to:
(a) significant differences between subsymbolic AI and traditional systems, which have strong implications for their security and for how to test their security properties;
(b) non-determinism: AI-based systems may evolve over time (self-learning systems), and security properties may degrade;
(c) the test oracle problem: assigning a test verdict is different and more difficult for AI-based systems, since not all expected results are known a priori;
(d) data-driven algorithms: in contrast to traditional systems, (training) data shapes the behaviour of subsymbolic AI.

The scope of this work item covers the following topics:
• security testing approaches for AI
• security test oracles for AI
• definition of test adequacy criteria for security testing of AI
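The test oracle problem in item (c) is commonly worked around with metamorphic testing: instead of comparing against a known expected output, the test checks an invariant relation between outputs. The sketch below is illustrative only and is not taken from the work item; the trivial threshold "classifier" and the function names are hypothetical stand-ins for a real AI-based component under test.

```python
import random

def predict(features):
    # Hypothetical stand-in for an AI-based component under test:
    # a trivial threshold "classifier". In practice this would be
    # a trained model whose exact outputs are not known a priori.
    return 1 if sum(features) > 0.5 else 0

def metamorphic_robustness_oracle(model, features, epsilon=1e-3, trials=20):
    """Metamorphic test oracle: rather than a fixed expected result,
    check an invariant -- small perturbations of the input should not
    change the predicted class. Returns True if the relation holds."""
    baseline = model(features)
    for _ in range(trials):
        perturbed = [x + random.uniform(-epsilon, epsilon) for x in features]
        if model(perturbed) != baseline:
            return False  # relation violated: potential robustness issue
    return True

# An input far from the decision boundary should satisfy the relation.
print(metamorphic_robustness_oracle(predict, [0.4, 0.4]))
```

The verdict here comes from the metamorphic relation itself, not from a precomputed expected output, which is why this style of oracle is attractive when the system's behaviour is learned from data.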
Supporting Organizations: Cadzow Communications, Fraunhofer FOKUS, Huawei Tech. (UK) Co., Ltd, BT plc