
ISG SAI Activity Report 2019

Chair: Alex Leadbeater

Developing technical specifications to mitigate threats arising from the deployment of AI, and threats to AI systems, from both other AIs and from conventional sources.

Our Industry Specification Group on Securing Artificial Intelligence (ISG SAI) held its first meeting in October 2019.

The underlying rationale for ISG SAI is that autonomous mechanical and computing entities may make decisions that act against the parties relying on them, either by design or as a result of malicious intent.

The group’s primary responsibility is to develop technical specifications that mitigate threats arising from the deployment of AI, and threats to AI systems, from both other AIs and from conventional sources. As a pre-standardization activity, ISG SAI is intended to frame the security concerns arising from AI and to lay the foundation of a longer-term response to threats to AI by sponsoring the future development of normative technical specifications.

In particular, the group’s work addresses three aspects of AI in the standards domain:

  • Securing AI from attack, e.g. where AI is a component in the system that needs defending.
  • Mitigating against AI, e.g. where AI is the ‘problem’ (or is used to improve and enhance other, more conventional attack vectors).
  • Using AI to enhance security measures against attack, e.g. where AI is part of the ‘solution’ (or is used to improve and enhance more conventional countermeasures).

ISG SAI aims to develop technical knowledge that acts as a baseline in ensuring that AI systems are secure. Stakeholders impacted by the activity of the group include end users, manufacturers, operators and governments.

Five new Work Items were adopted at the kick-off meeting:

The group’s first WI is an AI threat ontology, to be published as a Group Report (GR). This seeks to align terminology across different stakeholders and multiple industries. It will examine what is meant by an AI threat in the context of cyber and physical security, and how it might differ from threats to traditional systems.
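As an illustration only of what an ontology’s shape can be (the GR itself will define the actual concepts and relations; every name below is hypothetical), a threat ontology can be pictured as named concepts linked by broader/narrower relations, as in this minimal Python sketch:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ThreatConcept:
        """One node in a toy threat ontology; all names here are illustrative."""
        name: str
        parent: Optional[str] = None          # broader concept it specialises
        targets: List[str] = field(default_factory=list)  # assets at risk
        vs_traditional: str = ""              # how it differs from classic threats

    # A hypothetical fragment: two AI-specific threats under a common root.
    ontology = [
        ThreatConcept("Threat"),
        ThreatConcept("Data Poisoning", parent="Threat",
                      targets=["training data", "resulting model"],
                      vs_traditional="attacks the learning process, not the code"),
        ThreatConcept("Model Evasion", parent="Threat",
                      targets=["deployed model"],
                      vs_traditional="perturbs inputs rather than exploiting bugs"),
    ]

    for c in ontology:
        print(f"{c.name} (broader: {c.parent}) -> targets {c.targets}")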

A Group Report on the data supply chain will summarise the methods currently used to source data for training AI, along with the regulations, standards and protocols that can control the handling and sharing of that data. It will then provide a gap analysis on this information, to scope possible standards requirements for ensuring the traceability and integrity of the data and its associated attributes, information and feedback, as well as their confidentiality.
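As a rough sketch of what traceability and integrity controls can look like in practice (illustrative only; the GR does not prescribe this mechanism), each sourced artefact can carry a provenance record with a cryptographic digest that downstream consumers re-verify:

    import hashlib
    import json
    from dataclasses import asdict, dataclass

    @dataclass
    class ProvenanceRecord:
        """Hypothetical provenance record for one training-data artefact."""
        source: str   # where the data was obtained
        licence: str  # terms governing handling and sharing
        sha256: str   # integrity digest of the raw bytes

    def record_artefact(data: bytes, source: str, licence: str) -> ProvenanceRecord:
        """Capture a digest at sourcing time so later consumers can verify integrity."""
        return ProvenanceRecord(source=source, licence=licence,
                                sha256=hashlib.sha256(data).hexdigest())

    def verify_artefact(data: bytes, record: ProvenanceRecord) -> bool:
        """True if the data still matches the digest captured at sourcing time."""
        return hashlib.sha256(data).hexdigest() == record.sha256

    if __name__ == "__main__":
        raw = b"label,text\n0,example training row\n"
        rec = record_artefact(raw, source="https://example.org/dataset",
                              licence="CC-BY-4.0")
        print(json.dumps(asdict(rec), indent=2))
        assert verify_artefact(raw, rec)              # unmodified data passes
        assert not verify_artefact(raw + b"x", rec)   # tampered data fails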

A Group Specification (GS) on the Security Testing of AI will identify objectives, methods and techniques that are appropriate for the security testing of AI-based components.
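By way of a hedged illustration (the GS will define its own objectives and methods; nothing below is drawn from it), one basic security-testing technique is to check whether small input perturbations flip a model’s decision. A minimal Python sketch with a toy stand-in model:

    import random

    def predict(features):
        """Stand-in for an AI-based component under test (hypothetical toy model)."""
        return 1 if sum(features) > 0 else 0

    def perturbation_test(model, features, epsilon=0.01, trials=100):
        """Count how often the prediction flips under small random perturbations,
        one simple robustness-style security test among many possible."""
        baseline = model(features)
        failures = 0
        for _ in range(trials):
            noisy = [x + random.uniform(-epsilon, epsilon) for x in features]
            if model(noisy) != baseline:
                failures += 1
        return failures

    if __name__ == "__main__":
        sample = [0.4, -0.2, 0.1]
        flips = perturbation_test(predict, sample)
        print(f"{flips}/100 perturbed inputs changed the model's decision")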

A GR presenting an SAI problem statement will define and prioritise potential AI threats, along with recommended actions. It will describe the challenges of securing AI-based systems and solutions – including challenges relating to data, algorithms and models in both training and implementation environments.

A further GR presenting a mitigation strategy will summarise and analyse existing and potential mitigations against threats to AI-based systems. The goal is to produce guidelines for mitigating threats introduced by adopting AI into systems.
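As one example of the kind of mitigation such guidelines might survey (a simplified take on randomised smoothing; illustrative only, not drawn from the GR), a prediction can be taken as a majority vote over noisy copies of the input, which blunts small adversarial perturbations:

    import random
    from collections import Counter

    def predict(features):
        """Stand-in for an AI-based component (hypothetical toy model)."""
        return 1 if sum(features) > 0 else 0

    def smoothed_predict(model, features, sigma=0.05, votes=25):
        """Majority vote over Gaussian-noised copies of the input, a simplified
        sketch of randomised smoothing as one candidate mitigation."""
        tally = Counter(
            model([x + random.gauss(0, sigma) for x in features])
            for _ in range(votes)
        )
        return tally.most_common(1)[0][0]

    if __name__ == "__main__":
        sample = [0.4, -0.2, 0.1]
        print("plain prediction:   ", predict(sample))
        print("smoothed prediction:", smoothed_predict(predict, sample))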

LOOK OUT FOR IN 2020 – ISG SAI WORK IN PROGRESS:

  • Group Specification (GS) on security testing of AI – identifying objectives, methods and techniques appropriate to the security testing of AI-based components
  • Group Report (GR) on AI threat ontology – defining AI threats and how they might differ from threats to traditional systems
  • GR on data supply chain – summarising the methods currently used to source data for training AI, along with the regulations, standards and protocols that control the handling and sharing of that data
  • GR on SAI problem statement – describing the challenges of securing AI-based systems and solutions, relating to data, algorithms and models in both training and implementation environments
  • GR on SAI mitigation strategy – analysing existing and potential mitigations against threats to AI-based systems