

The government launched its new voluntary Code of Practice for the Cyber Security of AI earlier this year. The Code sets security standards for AI systems through thirteen principles designed to safeguard AI systems and the organisations that develop and deploy them.
Background
The Code identifies key “stakeholders”: “developers”, “system operators”, “data custodians”, “end-users”, and “affected entities”. These cover a very wide range of actors, and the Code uses them as defined terms so that each receives specific treatment.
The Code is structured around requirements (“shall”), recommendations (“should”), and possibilities (“may”). Though the Code is voluntary, any organisation that chooses to comply must follow every requirement indicated by “shall”.
The government published its Implementation Guide for the AI Cyber Security Code of Practice alongside the Code itself. The Guide gives stakeholders practical advice on how to comply with the Code's requirements through more detailed explanations and examples.
The principles
The Code sets out 13 principles, which can be summarised as follows:
1 Raise Awareness of AI Security Threats and Risks
This principle emphasises the importance of understanding the specific security threats and risks associated with AI systems. It encourages organisations to educate their staff and stakeholders about potential vulnerabilities and the impact of cyber threats on AI.
2 Design AI Systems for Security as well as Functionality and Performance
Security should be a fundamental consideration during the design phase of AI systems. This principle advocates for integrating security measures from the outset, ensuring that AI systems are resilient against attacks and can maintain their integrity and functionality.
3 Evaluate Threats and Manage the Risks to your AI System
Organisations are encouraged to continuously assess the threats to their AI systems and implement risk management strategies. This involves identifying potential vulnerabilities, evaluating their impact, and taking appropriate measures to mitigate risks.
4 Enable Human Responsibility for AI Systems
Human oversight is crucial in the deployment and operation of AI systems. This principle stresses the need for clear accountability and responsibility, ensuring that humans remain in control and can intervene when necessary.
5 Identify, Track, and Protect your Assets
Organisations should maintain an inventory of their AI assets, including data, models, and infrastructure. This principle highlights the importance of tracking and protecting these assets to prevent unauthorised access and ensure their security.
6 Secure Your Infrastructure
The infrastructure supporting AI systems must be robust and secure. This principle advocates for implementing strong security measures to protect the hardware and software components that underpin AI systems.
7 Secure your Supply Chain
Organisations must follow secure software supply chain processes for AI model and system development. If using models or components that are not well-documented or secured, they must justify their decision and implement mitigating controls. For example, if a component lacks strong documentation, a risk assessment should be conducted, and the decision documented.
8 Document your Data, Models and Prompts
Developers should maintain a clear audit trail of system design and post-deployment maintenance plans, including security-relevant information like training data sources and potential failure modes. For instance, cryptographic hashes for model components should be released to verify authenticity. Documentation should detail how public training data was obtained to identify potential data ‘poisoning attacks’.
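By way of illustration only (this is our own sketch, not an example taken from the Code or the Implementation Guide), releasing and checking cryptographic hashes for model components might look something like the following, assuming SHA-256 and a simple JSON manifest of file hashes:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def publish_manifest(model_dir: Path, manifest_path: Path) -> None:
    """Write a manifest mapping each model component file to its hash."""
    manifest = {
        p.name: sha256_of(p)
        for p in sorted(model_dir.iterdir())
        if p.is_file()
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_manifest(model_dir: Path, manifest_path: Path) -> bool:
    """Re-hash downloaded components and compare against the published manifest."""
    manifest = json.loads(manifest_path.read_text())
    return all(
        sha256_of(model_dir / name) == expected
        for name, expected in manifest.items()
    )


# Hypothetical usage: the developer publishes the manifest alongside the
# model weights, and a System Operator verifies the files before deployment.
# publish_manifest(Path("model_v1"), Path("model_v1_hashes.json"))
# assert verify_manifest(Path("model_v1"), Path("model_v1_hashes.json"))
```

A developer would publish the manifest alongside the model files, and a System Operator could re-run the verification step before deployment to confirm that the components have not been altered in transit.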
9 Conduct Appropriate Testing and Evaluation
All models, applications, and systems must undergo security testing before release. Independent security testers with relevant technical skills should be used. Findings from testing should be shared with System Operators to inform their own evaluations. Developers should ensure model outputs do not allow reverse engineering of non-public aspects or unintended influence over the system.
10 Communication and Processes associated with End-users and Affected Entities
System Operators must clearly communicate to end-users how their data will be used, accessed, and stored. Accessible guidance should be provided for the use, management, and configuration of AI systems, including limitations and potential failure modes. Developers and System Operators should support end-users during and after a cyber security incident, with documented processes agreed upon in contracts.
11 Maintain Regular Security Updates, Patches and Mitigations
Developers should provide security updates and patches, notifying System Operators who then deliver these to end-users. Mechanisms and contingency plans should be in place to mitigate security risks when updates cannot be provided.
12 Monitor your System’s Behaviour
System Operators should log system and user actions to support security compliance and incident investigations. Logs should be analysed to detect anomalies, security breaches, or unexpected behaviour over time. Monitoring internal states of AI systems can address security threats and enable future security analytics.
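As a purely illustrative sketch (the field names, thresholds and rate-based check are our assumptions, not drawn from the Code), logging user actions as structured records and flagging unexpected behaviour over time might look like this:

```python
import json
import logging
import time
from collections import defaultdict, deque

# Structured audit logger for an AI system's user-facing actions.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")


def log_action(user_id, action, detail):
    """Record each system/user action as a timestamped JSON line."""
    audit_log.info(json.dumps({
        "ts": time.time(),
        "user": user_id,
        "action": action,
        "detail": detail,
    }))


# Very simple anomaly check: flag users exceeding a request-rate threshold,
# as one example of detecting "unexpected behaviour over time".
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100
_recent = defaultdict(deque)


def is_anomalous(user_id):
    """Return True if the user's request rate exceeds the allowed threshold."""
    now = time.time()
    window = _recent[user_id]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS_PER_WINDOW


# Hypothetical usage:
# log_action("user-42", "prompt_submitted", "length=512")
# if is_anomalous("user-42"):
#     log_action("user-42", "anomaly_flagged", "request rate exceeded threshold")
```

In practice, a System Operator would feed such logs into existing monitoring and security analytics tooling rather than relying on a standalone script.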
13 Ensure proper Data and Model Disposal
When transferring or sharing ownership of training data or models, Data Custodians should be involved to securely dispose of these assets, preventing security issues from transferring between AI systems.
Our comments
As AI continues to revolutionise various sectors, ensuring its security is paramount to protecting our digital economy and society, and the Code is a welcome step towards that goal. It provides a structured approach to addressing the cyber security challenges posed by AI, which is essential for fostering trust and confidence in the technology.
The idea is that, by following these principles, organisations can better protect their AI systems from cyber threats, supporting the continued growth and success of AI innovations. In the long term, embracing the Code is likely to be advantageous not only because it improves security, but also because compliance signals to others in a fast-changing marketplace a commitment to the safe and secure use of AI technologies.
It will be interesting to see how many organisations choose to implement this voluntary Code. The level of uptake, and the extent to which stakeholders integrate these principles into their AI security practices, remains unclear. Some organisations may fully embrace the guidelines, while others might be slower to adopt them. The Code's true effectiveness will depend on widespread acceptance and implementation.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.
For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.
This article was written by Samantha Howell, Hayden Searle and Richard Pettit.