Profiling the Future: NIST’s Framework Extensions for Generative AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 06/04/2024
In Blog

UPDATE — AUGUST 2025: This blog offers a solid overview of NIST’s approach to generative AI risk management, but readers should note that the work described is ongoing and still in development as of mid-2025.

NIST is actively adapting its AI Risk Management Framework (AI RMF) to address the unique risks of generative AI through the development of dedicated profiles, but these profiles are not yet finalized or published as standalone documents. Rather, they are being shaped through ongoing stakeholder engagement, pilot programs, and evolving guidance.

Key updates include:

  • NIST’s ARIA (Assessing Risks and Impacts of AI) program, launched in May 2024, is a major initiative that operationalizes the AI RMF with real-world testing of AI systems—including generative AI—across sociotechnical contexts.

  • The AI RMF Version 1.0, released in January 2023, remains the foundational framework. Generative AI profiles are now being developed under its structure, focusing on risks such as misinformation, bias, and intellectual property misuse.

  • As of mid-2025, no standalone “generative AI profile” has been released, but the concept is actively evolving through workshops, public feedback, and interdisciplinary research collaborations.

The blog accurately captures the goals and strategic direction of NIST’s generative AI work—particularly the emphasis on transparency, ethical use, and cross-border relevance—but readers should understand this as a visionary and ongoing effort, not a completed regulatory framework.

For authoritative updates, readers should consult NIST’s official AI Risk Management Framework publications and announcements.


 

ORIGINAL BLOG POST:

 

Profiling the Future: NIST’s Framework Extensions for Generative AI

 

As artificial intelligence (AI) continues to evolve, so too must the frameworks that govern its use. Recognizing the unique challenges and potential of generative AI, the National Institute of Standards and Technology (NIST) is developing detailed profiles within its AI Risk Management Framework (AI RMF) tailored specifically to this advanced subset of AI technologies. This initiative not only highlights the intricacies of generative AI but also sets a precedent for how such technologies should be managed and regulated.

 

What is Generative AI?

 

Generative AI refers to models that learn from large data sets and can create new content, such as text, images, audio, and video. This technology goes beyond analyzing existing data to generating new, original outputs. Applications range from creating art and music to synthesizing realistic human voices and writing coherent text passages. The power of generative AI lies in its ability to learn from vast amounts of data and produce outputs that mimic human creativity and reasoning.
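
To make the idea concrete, the short Python sketch below generates a text passage from a prompt. It is purely illustrative and assumes the open-source Hugging Face transformers library and the small, publicly available gpt2 model, neither of which is mentioned in this post.

    # Illustrative only: generate a short text passage from a prompt.
    # Assumes: pip install transformers torch  (libraries chosen for this example,
    # not referenced by NIST or by the original post)
    from transformers import pipeline

    # Load a small, publicly available text-generation model.
    generator = pipeline("text-generation", model="gpt2")

    # Ask the model to continue a prompt; the output is new text produced
    # from patterns the model learned during training.
    result = generator("Generative AI can help organizations",
                       max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])

The only point of the example is that the model produces text it was never explicitly given; that generative quality is what makes provenance, misinformation, and intellectual property questions central to the profiles discussed below.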

 

The NIST AI Risk Management Framework: Adapting to Generative AI

 

The NIST framework, traditionally known for its comprehensive approach to AI governance, is expanding to include specific profiles that address the nuances of generative AI. This adaptation is crucial, given the distinct risks associated with these technologies, such as the potential for creating misleading information or infringing on intellectual property rights.

 

Creating Detailed Profiles for Generative AI

 

The process of developing these profiles involves several key steps:

 

  • Stakeholder Collaboration: Engaging with industry experts, researchers, and policymakers to gather diverse insights into the unique aspects of generative AI.
  • Risk Assessment: Identifying specific risks associated with generative AI, including ethical concerns, data integrity issues, and potential misuse (one way such an assessment might be recorded is sketched after this list).
  • Guideline Formulation: Drafting detailed guidelines that provide clear protocols for mitigating risks, ensuring ethical usage, and promoting transparency.
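
As a concrete illustration of the risk-assessment step, the Python sketch below records a few generative AI risks alongside candidate mitigations. The structure, field names, and example entries are hypothetical choices made for this post; they are not NIST’s format and do not come from the AI RMF or any draft profile.

    # Hypothetical example only: one way an organization might record
    # generative AI risks and planned mitigations. Field names and
    # entries are illustrative, not NIST-defined.
    from dataclasses import dataclass

    @dataclass
    class RiskEntry:
        risk: str        # what could go wrong
        context: str     # where it is most likely to appear
        severity: str    # e.g., "low", "medium", "high"
        mitigation: str  # planned control or safeguard

    risk_register = [
        RiskEntry("Misleading or fabricated content", "text and image generation",
                  "high", "provenance labeling and human review before publication"),
        RiskEntry("Intellectual property infringement", "training data and generated outputs",
                  "medium", "licensing review and filtering of training sources"),
        RiskEntry("Biased or harmful outputs", "user-facing applications",
                  "high", "pre-deployment evaluation and ongoing monitoring"),
    ]

    # Print a simple summary of the register.
    for entry in risk_register:
        print(f"[{entry.severity.upper()}] {entry.risk} -> {entry.mitigation}")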

Implications of the Generative AI Profiles

 

The creation of dedicated profiles for generative AI within the NIST framework carries profound implications for multiple stakeholders:

 

  • For Regulators: These profiles offer a blueprint for crafting regulations that address the specific challenges posed by generative AI, facilitating more informed and effective governance.
  • For Developers: Access to detailed guidelines helps developers understand and manage the risks associated with their technologies, fostering innovation within a secure and ethical framework.
  • For Users: With clearer standards, users can trust the safety and reliability of generative AI applications, which is crucial for widespread adoption.

 

Challenges and Opportunities

 

The biggest challenge is speed: AI evolves faster than governance frameworks. But that gap also creates an opportunity to refine standards continuously. Generative AI’s cross-border nature adds further complexity, and NIST’s work could influence global cooperation and the harmonization of rules.

 

The Road Ahead

 

Extending the AI RMF to generative AI is about more than reducing risks. It creates a pathway for safe innovation and long-term public trust. With structured guidelines, NIST helps ensure AI develops in ways that benefit society.

 

Need Help? 

If you’re wondering how the NIST AI Risk Management Framework, and other AI regulations around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and answer your questions and concerns.
