Insights Cyber Security of AI: Government publishes Code of Practice

The Government has published a voluntary cyber security Code of Practice for artificial intelligence.

The Code of Practice follows a Call for Views announced last year (which we commented upon here), which pointed to the need to address the specific cyber security risks associated with AI models and systems at every stage of the AI lifecycle. According to the Government, the Code of Practice “sets out the baseline cyber security principles to help secure AI systems and the organisations which develop and deploy them”.

The Government has also confirmed that the Code will be submitted to the European Telecommunications Standards Institute (“ETSI”) with the intention that it will be used as the basis for a new global standard.

The Code itself is substantively similar to the draft version on which views were sought last year. It identifies the five stages of the AI lifecycle as being (1) secure design; (2) secure development; (3) secure deployment; (4) secure maintenance; and (5) secure end of life. At each stage, the Code sets out which of the 13 Principles (see below) will apply, as well as the stakeholder to whom the relevant Principle will primarily apply.

The Principles mirror those that were included in the draft Code, but with the addition of a final principle which focusses on the end of life of an AI model, covering the transferring or sharing of ownership of the training data and/or a model as well as the decommissioning of a model and/or system.

The 13 Principles are as follows:

  1. Raise awareness of AI security threats and risks;
  2. Design your AI system for security as well as functionality and performance;
  3. Evaluate the threats and manage the risks to your AI system;
  4. Enable human responsibility for AI systems;
  5. Identify, track and protect your assets;
  6. Secure your infrastructure;
  7. Secure your supply chain;
  8. Document your data, models and prompts;
  9. Conduct appropriate testing and evaluation;
  10. Communication and processes associated with End-users and Affected Entities;
  11. Maintain regular security updates, patches and mitigations;
  12. Monitor your system’s behaviour;
  13. Ensure proper data and model disposal.

Accompanying the Code of Practice is an ‘Implementation Guide’ which is intended to “guide stakeholders across the AI supply chain on the Code’s implementation by providing non-exhaustive scenarios as well as examples of practical solutions to meet these provisions”. For example, it includes recommendations on the content of AI security training programmes and risk assessments. The Government also intends to submit the Implementation Guide to ETSI.

Commenting on the publication of the new Code, the National Cyber Security Centre’s Chief Technology Officer, Ollie Whitehouse, said “the new Code of Practice, which we have produced in collaboration with global partners, will not only help enhance the resilience of AI systems against malicious attacks but foster an environment in which UK AI innovation can thrive. The UK is leading the way by establishing this security standard, fortifying our digital technologies, benefiting the global community and reinforcing our position as the safest place to live and work online”.

To read the Code of Practice in full, click here. To read the implementation guide, click here.