NIST Releases New Framework to Address Risks of Generative AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/21/2024
In News

The National Institute of Standards and Technology (NIST) has released a new profile under its Artificial Intelligence Risk Management Framework (AI RMF), specifically addressing the unique challenges posed by Generative Artificial Intelligence (GAI). The document, titled Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, was developed in response to President Biden’s Executive Order 14110, which emphasizes the importance of ensuring that AI technologies are safe, secure, and trustworthy.


The profile serves as a companion resource to the AI RMF 1.0, which was released in January 2023, and is designed to help organizations incorporate trustworthiness into the design, development, and deployment of AI systems. It provides a comprehensive overview of the risks associated with GAI, offering guidance on how to govern, map, measure, and manage these risks effectively across various stages of the AI lifecycle.


GAI, which refers to AI models capable of generating content such as text, images, videos, and audio, presents a range of risks that differ from traditional AI systems. These risks include confabulation, where the AI generates erroneous or misleading content; the creation of harmful, violent, or hateful content; and privacy concerns arising from the unauthorized use or leakage of sensitive data. The profile also highlights the environmental impacts of training and deploying GAI models, noting the significant energy consumption associated with these activities.


One of the key features of the NIST profile is its focus on cross-sectoral applications of GAI, recognizing that these AI models are increasingly being used across diverse industries. The document identifies specific actions organizations can take to mitigate GAI risks, such as establishing transparency policies, conducting pre-deployment testing, and implementing robust incident disclosure procedures. It also emphasizes the importance of ongoing monitoring and review of AI systems to ensure that they continue to operate safely and effectively.


The development of this profile was informed by public feedback and consultations with stakeholders from various sectors, including members of NIST’s Generative AI Public Working Group. This collaborative approach reflects NIST’s commitment to creating a framework that is not only technically rigorous but also aligned with the needs and concerns of the broader AI community.


As GAI continues to evolve, NIST plans to update this profile to reflect new insights and emerging risks. Future revisions will incorporate additional AI RMF subcategories and suggested actions based on empirical evidence and the evolving landscape of AI technologies. This new profile marks a significant step forward in the management of AI risks, particularly those associated with the rapidly advancing field of generative AI.


Need Help?



If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
