UPDATE – FEBRUARY 2026:
Additional shifts have occurred in the U.S. federal AI governance landscape. Although President Biden’s Executive Order formally launching the U.S. AI Safety Institute was rescinded in early 2025, the technical work has continued under the Department of Commerce through NIST’s Center for AI Standards and Innovation (CAISI). The consortium structure and working groups originally formed under AISIC remain active. There is continued focus on red-teaming, model evaluations, measurement science, cybersecurity coordination, and standards development. The branding and political framing have evolved, but the core technical collaboration across industry, academia, and government persists.
ISO/IEC 42001:2023 remains the current and authoritative international AI management systems standard. Since its publication in December 2023, adoption has accelerated significantly. Major technology vendors now publicly align their governance programs with ISO/IEC 42001. Certification pathways are increasingly used as signals of trustworthy AI practices. The standard is now widely treated as complementary to the NIST AI Risk Management Framework. As well as a practical foundation for EU AI Act readiness, rather than merely a “lightweight starting point.”
Overall, this blog post remains substantively accurate. However, readers should understand that U.S. AI governance efforts have shifted in structure and terminology since 2025, and ISO/IEC 42001 has matured from a newly released standard into a widely adopted governance benchmark within the global AI compliance ecosystem.
ORIGINAL BLOG POST:
2024’s Exciting Developments in AI Governance and Risk Management
In the world of AI, governance and risk management are critical issues that are receiving increasing attention. As AI systems become more advanced and are deployed in a wider range of applications, it’s essential to have frameworks and standards in place to ensure these systems are developed and used in a safe, trustworthy, and responsible manner.
The NIST’s AI Safety Institute
One significant development in this area is the launch of the AI Safety Institute Consortium by the National Institute of Standards and Technology. This consortium was established in response to an executive order from President Joe Biden on promoting safe and trustworthy AI. The AI Safety Institute’s primary objective is to develop standards and measurement science for AI systems, with a particular focus on generative AI. While generative AI, such as large language models, is a key area of interest, the consortium recognizes the need to measure and assess the capabilities of various AI systems, not just language models.
BABL AI is excited to be part of this consortium. The consortium brings together a diverse group of stakeholders, including private companies, non-profits, and auditing firms, to work collaboratively on AI governance and risk management. The AI Safety Institute Consortium has already established five working groups, and BABL AI will contribute to several of them. In particular, BABL AI will focus on testing AI systems for risky behavior and general performance, as well as the governance frameworks around these systems.
ISO’s Big Release
Another exciting development in the field of AI governance is the release of ISO/IEC 42001, an international standard for AI risk management and governance. While not entirely complete, this standard provides a solid foundation for managing AI systems more effectively. ISO/IEC 42001 offers several advantages for organizations developing cutting-edge AI and seeking to build trust with customers:
- It’s from an internationally accepted standards body, lending credibility and recognition.
- It’s relatively lightweight, making it less intimidating for organizations to start implementing.
- It’s designed to be auditable, allowing organizations to obtain certifications to demonstrate compliance.
- It serves as an excellent gateway toward compliance with the upcoming EU AI Act, which is expected to have significant regulatory implications.
- It provides a clear way for organizations to demonstrate the trustworthiness of their AI systems.
Even if an organization is not ready to fully implement ISO/IEC 42001 immediately, they can start by focusing on a few key controls and gradually building up their compliance. One recommended approach is to begin by establishing a basic governance committee, followed by implementing a simple risk assessment process. Additionally, organizations should consider upskilling select team members on responsible AI practices and principles. Overall, ISO/IEC 42001 is a great start toward preparing for the EU AI Act.
Need Help?
It can be overwhelming to keep track of all the AI regulations around the world. So, don’t hesitate to reach out to BABL AI. Their team of Audit Experts can provide valuable insights while answering your questions and concerns.


