The UK government has released a voluntary AI Cyber Security Code of Practice, aiming to establish baseline security standards for artificial intelligence systems. The policy, developed by the Department for Science, Innovation and Technology (DSIT), addresses growing cyber threats associated with AI and is intended to shape a future global standard through the European Telecommunication Standards Institute (ETSI).
Unlike traditional software, AI systems present unique security challenges such as data poisoning, model obfuscation, and indirect prompt injection. The UK’s new framework outlines best practices for mitigating these risks across the AI lifecycle, covering design, deployment, maintenance, and eventual decommissioning.
The voluntary code was developed following public consultation, with 80 percent of respondents supporting government intervention to clarify AI security requirements. The initiative builds on previous guidelines issued by the UK’s National Cyber Security Centre and aligns with international standards from organizations like the National Institute of Standards and Technology.
DSIT has structured the code around thirteen core principles, guiding AI developers, system operators, and data custodians in areas such as:
- Ensuring AI security is considered from design to decommissioning
- Implementing robust protections against adversarial attacks
- Maintaining transparency in AI decision-making
- Strengthening data governance and model security
- Defining clear responsibilities for AI system operators
While the code is voluntary, the UK government aims to integrate its recommendations into future ETSI regulations, potentially influencing global AI cybersecurity standards.
The framework targets a broad range of stakeholders in the AI ecosystem, including:
- Developers, responsible for creating or adapting AI models
- System operators, who integrate AI into business processes
- Data custodians, managing AI training data and permissions
- End-users, including employees and consumers using AI-powered tools
The code does not apply to academic AI research projects that are not intended for deployment.
The UK’s move comes amid increasing global scrutiny over AI security risks. The European Union has introduced strict regulations under the EU AI Act, while the United States is strengthening cybersecurity oversight through executive orders and federal agency guidelines. By proposing a voluntary yet structured framework, the UK seeks to influence global AI security policies while maintaining industry flexibility.
While many AI companies have welcomed the clarity, some critics argue that voluntary standards may not be enough to enforce compliance, particularly as AI-driven cyber threats become more sophisticated. The UK government has signaled that it will monitor adoption and consider regulatory enforcement if needed.
As AI continues to transform industries, securing its infrastructure will remain a top priority. The UK’s AI Cyber Security Code of Practice is a step toward ensuring AI systems are both innovative and resilient in the face of emerging threats.
Need Help?
If you’re concerned or have questions about how to navigate the UK or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.