New NIST Guide Addresses Misuse Risks of Dual-Use AI Models; Public Feedback Sought

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/01/2024
In News

The National Institute of Standards and Technology (NIST) has released a draft guide aimed at managing the misuse risks associated with dual-use foundation models. This guide, known as NIST AI 800-1, is available for public comment until September 9, 2024. The document, published by the U.S. AI Safety Institute, is part of an ongoing effort to ensure that AI technologies, particularly those capable of both beneficial and potentially harmful applications, are developed and deployed responsibly.

Dual-use foundation models are powerful AI systems that can be adapted for a variety of purposes. While these models offer significant benefits in areas such as healthcare and cybersecurity, they also pose risks if misused. For example, they could potentially be used to develop chemical, biological, radiological, or nuclear weapons, conduct offensive cyber operations, or generate harmful content such as child sexual abuse material or non-consensual intimate imagery. The NIST AI 800-1 guide addresses these risks by providing a framework for organizations to manage and mitigate them throughout the AI lifecycle.

The document outlines several challenges in managing the misuse risks of foundation models. One major challenge is the broad applicability of these models, which makes it difficult to anticipate all potential misuses. Additionally, a model's capabilities in one domain do not reliably indicate its capabilities in another, making it challenging to predict how it might be misused.

To address these challenges, NIST AI 800-1 offers a set of objectives and practices for organizations to follow. These include anticipating potential misuse risks, managing the risks of model theft, measuring misuse risk, and ensuring that misuse risks are managed before deploying foundation models. The guidelines also emphasize the importance of transparency, urging organizations to provide regular reports on how they are managing these risks.

In addition to the guidelines, NIST has released a glossary of key terms and examples of safeguards that organizations can implement to prevent misuse. These safeguards include filtering training data, limiting access to model capabilities, and implementing security measures to prevent model theft.

The public is encouraged to review the draft and submit comments to NIST by the September 9, 2024, deadline. This feedback will be crucial in shaping the final version of the guide and ensuring that it effectively addresses the complex and evolving landscape of AI risks. For more information on the draft guide and how to submit comments, visit the NIST website or contact the U.S. AI Safety Institute directly.

Need Help?

If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.