U.S. and U.K. Cybersecurity Agencies Release Joint AI Guidelines

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/08/2023
In News

Amidst EU AI Act news, collaborative efforts from agencies in the U.S. and U.K. have yielded joint guidelines on AI. The “Guidelines for Secure AI System Development” were released on November 26 by the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the U.K.’s National Cyber Security Centre (NCSC).

These guidelines respond to the heightened value of AI as a target and the potential for AI to be weaponized. Targeted at AI system providers, whether developing in-house or integrating external components, the principles underscore a “secure by design” approach, emphasizing security ownership and transparency.

Spanning a comprehensive 20-page document, the guidelines cover the entire AI system life cycle, encompassing design, development, deployment, operation, and maintenance. Key recommendations include raising internal awareness about AI threats during design, securing supply chains during development, and monitoring for behavioral changes indicating compromise during operation and maintenance.

Emphasizing a holistic risk assessment, providers are urged to integrate security seamlessly with functionality and user experience from the outset. Secure supply chains, incident response plans, and easy-to-use secure systems are highlighted during development and deployment phases.

Documentation is a cornerstone: providers are expected to maintain comprehensive records for models, datasets, and more to enhance accountability. The guidelines advocate for “secure by default” systems and emphasize a culture of security, information sharing, and transparency about limitations to mitigate risks for users.

In conclusion, these guidelines seek to establish a robust foundation that promotes secure AI, with providers taking responsibility for downstream users and actively participating in a collaborative culture of security.

Need Help?

If you’re curious how these guidelines could impact you or your company, don’t hesitate to contact BABL AI. One of its audit experts can offer valuable guidance and support.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.