U.S. and U.K. Cybersecurity Agencies Release Joint AI Guidelines

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/08/2023
In News

Amid anticipation surrounding the EU AI Act, a collaborative effort between agencies in the United States and the United Kingdom has yielded joint guidelines on AI. Released on November 26, the “Guidelines for Secure AI System Development” come from the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency and the U.K.’s National Cyber Security Centre.

These guidelines respond to the heightened value of AI as a target and the potential for AI systems to be weaponized in attacks. Targeted at AI system providers, whether developing in-house or integrating external components, the principles underscore a “secure by design” approach, emphasizing security ownership and transparency.

The comprehensive 20-page document covers the entire AI system life cycle, encompassing design, development, deployment, operation, and maintenance. Key recommendations include raising internal awareness of AI threats during design, securing supply chains during development, and monitoring for behavioral changes that could indicate compromise during operation and maintenance.

Emphasizing a holistic risk assessment, providers are urged to integrate security seamlessly with functionality and user experience from the outset. Secure supply chains, incident response plans, and easy-to-use secure systems are highlighted during development and deployment phases.

Documentation is a cornerstone, with the guidelines calling on providers to maintain comprehensive records for models, datasets, and more to enhance accountability. The guidelines advocate for “secure by default” systems, emphasizing a culture of security, information sharing, and transparency about limitations to mitigate risks for users.

In conclusion, these guidelines seek to establish a robust foundation that promotes secure AI systems, with providers taking responsibility for downstream users and actively participating in a collaborative culture of security.

If you’re curious how these guidelines could impact you or your company, don’t hesitate to contact BABL AI. One of their audit experts can offer valuable guidance and support.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.