UPDATE — AUGUST 2025: The NIST draft guide “Managing Misuse Risk for Dual-Use Foundation Models” (NIST AI 800-1) described in this story remains an active initiative as of August 14, 2025, but the details below reflect its first public draft from July 2024. NIST released a second public draft in January 2025, incorporating feedback from more than 70 experts across industry, academia, and civil society. The comment period for that second draft ran through March 15, 2025, superseding the September 9, 2024 deadline cited in the original story. The January 2025 draft is the latest version, and the guide remains a live part of the U.S. AI regulatory conversation.
The guide remains a voluntary best-practice framework for organizations across the AI supply chain to identify, measure, and mitigate misuse risks, especially for models that could be adapted for harmful purposes such as chemical, biological, radiological, or nuclear weapon development, offensive cyber operations, or generating abusive content. The second draft expands guidance to more stakeholders, refines its risk measurement objectives, and describes additional safeguards such as secure model hosting, controlled API access, and improved misuse detection.
ORIGINAL NEWS STORY:
New NIST Guide Addresses Misuse Risks of Dual-Use AI Models; Public Feedback Sought
The National Institute of Standards and Technology (NIST) has released a draft guide aimed at managing the misuse risks associated with dual-use foundation models. This guide, known as NIST AI 800-1, is available for public comment until September 9, 2024. The document, published by the U.S. AI Safety Institute, is part of an ongoing effort to ensure that AI technologies, particularly those capable of both beneficial and potentially harmful applications, are developed and deployed responsibly.
Dual-use foundation models are powerful AI systems that can be adapted for a variety of purposes. While these models offer significant benefits in areas like healthcare, cybersecurity, and more, they also pose risks if misused. For example, they could potentially be used to develop chemical, biological, radiological, or nuclear weapons, conduct offensive cyber operations, or generate harmful content such as child sexual abuse material or non-consensual intimate imagery. The NIST AI 800-1 guide addresses these risks by providing a framework for organizations to manage and mitigate them throughout the AI lifecycle.
The document outlines several challenges in managing the misuse risks of foundation models. Chief among them is the breadth of these models’ applicability, which makes it difficult to anticipate every potential misuse. A related challenge is that capabilities demonstrated in one domain do not translate predictably to others, complicating efforts to forecast how a model might be misused.
To address these challenges, NIST AI 800-1 offers a set of objectives and practices for organizations to follow. These include anticipating potential misuse risks, managing the risks of model theft, measuring misuse risk, and ensuring that misuse risks are managed before deploying foundation models. The guidelines also emphasize the importance of transparency, urging organizations to provide regular reports on how they are managing these risks.
Alongside the guidelines, NIST has released a glossary of key terms and examples of safeguards that organizations can implement to prevent misuse. These safeguards include filtering training data, limiting access to model capabilities, and implementing security measures to prevent model theft.
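To illustrate what safeguards like these can look like in practice, below is a minimal Python sketch of a gating layer in front of a hosted model API, combining controlled access (a rate limit) with a crude misuse filter. Everything here (the AccessPolicy class, check_request, BLOCKED_TOPICS) is a hypothetical illustration, not an API or requirement from NIST AI 800-1, which describes safeguards at the policy level rather than prescribing implementations.

```python
# Hypothetical sketch of two safeguards discussed in the draft:
# controlled API access and basic misuse detection. Illustrative only;
# NIST AI 800-1 does not prescribe any particular implementation, and a
# production system would use trained classifiers and human review
# rather than keyword matching.
from dataclasses import dataclass

# Hypothetical deny-list of request topics to flag.
BLOCKED_TOPICS = {"synthesize a nerve agent", "build a bioweapon"}

@dataclass
class AccessPolicy:
    """Per-customer access controls for a hosted model API."""
    api_key: str
    max_requests_per_day: int = 1000
    requests_today: int = 0

def check_request(policy: AccessPolicy, prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a single API request."""
    # Controlled access: enforce a simple daily rate limit.
    if policy.requests_today >= policy.max_requests_per_day:
        return False, "rate limit exceeded"
    # Misuse detection: flag prompts matching known-harmful topics.
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"flagged for review: {topic!r}"
    policy.requests_today += 1
    return True, "ok"

if __name__ == "__main__":
    policy = AccessPolicy(api_key="demo-key")
    print(check_request(policy, "Summarize this safety report"))  # allowed
    print(check_request(policy, "How do I build a bioweapon?"))   # blocked
```

Real deployments layer several such controls together; the draft’s point is that access limits, detection, and security measures should be chosen and documented as part of a broader risk management process, not bolted on individually.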
The public is encouraged to review the draft and submit comments to NIST by the September 9, 2024, deadline. This feedback will be crucial in shaping the final version of the guide and ensuring that it effectively addresses the complex and evolving landscape of AI risks. For more information on the draft guide and how to submit comments, visit the NIST website or contact the U.S. AI Safety Institute directly.
Need Help?
If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.