UPDATE — AUGUST 2025: Since publishing the Generative AI Profile (AI 600-1) in 2024, NIST has finalized NIST AI 800-1: A Guide to Managing the Misuse Risk of Foundation Models. Released in mid-2025, AI 800-1 builds on the AI Risk Management Framework (AI RMF 1.0) and introduces more prescriptive safeguards against dual-use and malicious applications. These include refined misuse taxonomies, capability testing, watermarking, access tiers, and incident reporting. This final guide now stands alongside the AI RMF 1.0 and the Generative AI Profile as one of NIST’s cornerstone resources for trustworthy AI governance.
ORIGINAL NEWS STORY:
NIST Releases New Framework to Address Risks of Generative AI
The National Institute of Standards and Technology (NIST) has released a new profile under its Artificial Intelligence Risk Management Framework (AI RMF), specifically addressing the unique challenges posed by Generative Artificial Intelligence (GAI). The document, titled Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, was developed in response to President Biden’s Executive Order 14110, which emphasizes the importance of ensuring that AI technologies are safe, secure, and trustworthy.
Key Risks Identified in Generative AI
Generative AI models can produce text, images, audio, and video with remarkable accuracy. However, this same capability introduces new risks that differ from traditional AI systems. The NIST profile highlights several of the most pressing concerns:
-
Confabulation, in which AI generates false or misleading information.
-
Harmful or hateful content, including violent or biased material.
-
Privacy risks, such as the leakage of sensitive or copyrighted data.
-
Environmental impacts, particularly the high energy use required to train large AI models.
NIST’s profile explains that these risks must be mapped, measured, and managed across the full lifecycle of an AI system—from development to deployment and continuous monitoring.
Practical Guidance for Organizations
To reduce these risks, the profile recommends specific actions that organizations can take immediately. These include:
-
Creating transparency and accountability policies.
-
Conducting pre-deployment testing of generative models.
-
Establishing incident disclosure procedures and monitoring systems.
NIST also stresses that GAI tools should undergo regular review and testing to ensure their ongoing safety, accuracy, and fairness.
The document takes a cross-sectoral approach, acknowledging that generative AI is now embedded across industries such as education, healthcare, and finance. By providing flexible guidance, NIST ensures that the framework can be applied in both public- and private-sector settings.
Collaborative Development and Future Updates
NIST developed the Generative AI Profile through broad stakeholder collaboration, engaging the Generative AI Public Working Group and experts from academia, government, and industry. This process helped align technical precision with real-world ethical and operational needs. Looking ahead, NIST plans to update the profile regularly as AI technologies evolve. Future versions will include new research, public input, and expanded coverage of emerging issues such as AI governance, bias mitigation, and dual-use risks. This profile represents a major milestone in U.S. AI governance. It reinforces the federal commitment to safe, responsible innovation and complements NIST’s AI RMF 1.0 as a foundation for ongoing policy, compliance, and risk management efforts.
Need Help?
If you have questions about how to navigate U.S. or global AI regulations, contact BABL AI. Their Audit Experts can help your organization interpret frameworks like NIST’s AI RMF, assess risk, and strengthen compliance strategies for trustworthy AI.

