New NIST Guide Addresses Misuse Risks of Dual-Use AI Models; Public Feedback Sought

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/01/2024
In News

UPDATE — AUGUST 2025: The NIST draft guide “Managing Misuse Risk for Dual-Use Foundation Models” (NIST AI 800-1) described in this story remains an active initiative as of August 14, 2025. The details below reflect its first public draft from July 2024. NIST released a second draft in January 2025, incorporating feedback from more than 70 experts across industry, academia, and civil society. The comment period for that updated draft ran through March 15, 2025, not September 2024.

The guide remains a voluntary best-practice framework for organizations across the AI supply chain. It helps them identify, measure, and mitigate misuse risks, especially for models that could be adapted for harmful purposes such as CBRN weapon development, offensive cyber operations, or the generation of abusive content. The second draft expands guidance to more stakeholders, refines its risk measurement objectives, and introduces safeguards such as secure hosting, controlled API access, and improved misuse detection.

ORIGINAL NEWS STORY:

New NIST Guide Addresses Misuse Risks of Dual-Use AI Models; Public Feedback Sought


The National Institute of Standards and Technology (NIST) has released a draft guide aimed at managing the misuse risks associated with dual-use foundation models. This guide, known as NIST AI 800-1, is available for public comment until September 9, 2024. The document, published by the U.S. AI Safety Institute, is part of an ongoing effort to ensure that AI technologies, particularly those capable of both beneficial and potentially harmful applications, are developed and deployed responsibly.


Dual-use foundation models are powerful AI systems that can be adapted for a variety of purposes. While these models offer significant benefits in areas like healthcare, cybersecurity, and more, they also pose risks if misused. For example, they could potentially be used to develop chemical, biological, radiological, or nuclear weapons, conduct offensive cyber operations, or generate harmful content such as child sexual abuse material or non-consensual intimate imagery. The NIST AI 800-1 guide addresses these risks by providing a framework for organizations to manage and mitigate them throughout the AI lifecycle.


The document outlines several challenges in managing the misuse risks of foundation models. One major challenge is the broad applicability of these models, which makes it difficult to anticipate all potential misuses. Additionally, the capabilities of these models do not always clearly translate across different domains, making it challenging to predict how they might be misused.


Objectives and Practices


To address these issues, NIST AI 800-1 outlines objectives and practices for organizations. These include:

  • Anticipating potential misuse.

  • Managing the risk of model theft.

  • Measuring misuse risk.

  • Ensuring safeguards are in place before deployment.

The guidelines also stress transparency. Organizations should publish regular reports on how they manage misuse risk.


Glossary and Safeguards


The draft includes a glossary of key terms and practical safeguards to prevent misuse. These safeguards include filtering training data, limiting access to sensitive model features, and strengthening security to reduce the risk of model theft.


Public Participation


NIST encourages the public to review the draft and submit comments by the September 9, 2024, deadline. Feedback will shape the final version of the guide and ensure it addresses the complex and evolving risks of AI. More information, including instructions for submitting comments, is available on the NIST website or through the U.S. AI Safety Institute.


Need Help?


If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
