NIST Releases Groundbreaking Framework for Managing Risks in Generative AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/01/2024
In News

UPDATE — JULY 2025: The article below summarizes the initial public draft of the NIST Generative AI Risk Management Profile released in April 2024. That draft has since been superseded by the finalized NIST AI 600-1, titled “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” published on July 26, 2024. The final version is now the authoritative guidance and should be used for all compliance, governance, and implementation decisions.

While the draft and final versions share the same 12 core generative AI risk categories—including hallucination, bias, IP misuse, and toxic content—the finalized document may contain refinements in terminology, definitions, and action steps based on public feedback. The four-part structure (Govern, Map, Measure, Manage) remains intact.

Readers should refer directly to NIST AI 600-1 for the most current and accurate guidance on managing generative AI risks. The April 2024 draft is archived for historical context only.

ORIGINAL NEWS STORY:

NIST Releases Groundbreaking Framework for Managing Risks in Generative AI

The National Institute of Standards and Technology (NIST) has released an initial public draft of its Artificial Intelligence Risk Management Framework profile for generative AI. The document examines a range of risks associated with AI, including confabulation; dangerous or violent recommendations; data privacy; environmental concerns; human-AI configuration; information integrity; information security; intellectual property issues; obscene or abusive content; toxicity, bias, and homogenization; and value chain and component integration.

Identifying and Describing the Risks

One of the document's key contributions is its identification and description of these risks, highlighting the challenges and implications they pose for organizations deploying AI technologies. By categorizing and detailing each risk, the document aims to give readers a clear understanding of the multifaceted nature of AI-related risks and the importance of addressing them proactively.

Furthermore, the document offers a set of actions that organizations can take to govern, map, measure, and manage these risks effectively. These actions are designed to help organizations build robust AI risk management strategies spanning everything from technical considerations to societal impacts. The document also emphasizes the importance of stakeholder feedback in refining the framework: NIST welcomes comments on glossary terms, risk categorization, and the proposed actions, encouraging a collaborative approach to improving AI risk management practices.

In addition to outlining risks and actions, the document provides insights into emerging trends and challenges in the AI landscape. It references research studies and publications that explore topics such as the impact of AI on cybersecurity, the safety of generative AI in mental health applications, algorithm aversion among users, and the influence of altered images on machine vision and human perception.

Moreover, the document references external sources that discuss the ethical implications of AI, such as racial bias in AI recruitment tools, human perceptions of generative AI, and the prevalence of hallucinations in large language models. These references underscore the importance of ethical considerations and bias mitigation strategies in AI development and deployment.

Conclusion

Overall, the latest document from NIST serves as a valuable resource for organizations navigating the complex landscape of AI risks and opportunities. By providing a structured framework for understanding and addressing AI-related risks, it empowers organizations to make informed decisions and implement effective risk management practices in their AI initiatives.

Need Help?

If you’re wondering how NIST, or any other government body examining AI, could impact you, reach out to BABL AI. Their Audit Experts are ready to help with your questions and concerns and to provide valuable assistance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.