In the world of AI, governance and risk management are critical issues that are receiving increasing attention. As AI systems become more advanced and are deployed in a wider range of applications, it’s essential to have frameworks and standards in place to ensure these systems are developed and used in a safe, trustworthy, and responsible manner.
One significant development in this area is the launch of the AI Safety Institute Consortium by the National Institute of Standards and Technology (NIST). The consortium was established in response to President Joe Biden's executive order on safe, secure, and trustworthy AI. The AI Safety Institute's primary objective is to develop standards and measurement science for AI systems, with a particular focus on generative AI. While generative AI, such as large language models, is a key area of interest, the consortium recognizes the need to measure and assess the capabilities of AI systems broadly, not just language models.
BABL AI is excited to be part of this consortium. The consortium brings together a diverse group of stakeholders, including private companies, non-profits, and auditing firms, to work collaboratively on AI governance and risk management. The AI Safety Institute Consortium has already established five working groups, and BABL AI will contribute to several of them. In particular, BABL AI will focus on testing AI systems for risky behavior and general performance, as well as the governance frameworks around these systems.
Another exciting development in the field of AI governance is the release of ISO/IEC 42001, an international standard for AI management systems that addresses risk management and governance. While it does not cover every aspect of AI governance, the standard provides a solid foundation for managing AI systems more effectively. ISO/IEC 42001 offers several advantages for organizations developing cutting-edge AI and seeking to build trust with customers:
- It’s from an internationally accepted standards body, lending credibility and recognition.
- It’s relatively lightweight, making it less intimidating for organizations to start implementing.
- It’s designed to be auditable, allowing organizations to obtain certifications to demonstrate compliance.
- It serves as an excellent gateway toward compliance with the upcoming EU AI Act, which is expected to have significant regulatory implications.
- It provides a clear way for organizations to demonstrate the trustworthiness of their AI systems.
Even if an organization is not ready to implement ISO/IEC 42001 in full, it can start by focusing on a few key controls and gradually build up its compliance. One recommended approach is to begin by establishing a basic governance committee, then implement a simple risk assessment process. Organizations should also consider upskilling select team members on responsible AI practices and principles. Overall, ISO/IEC 42001 is a great starting point for preparing for the EU AI Act.
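To make the "simple risk assessment process" step concrete, here is a minimal sketch of an AI risk register in Python. The field names, the 1-to-5 likelihood/impact scale, and the priority threshold are illustrative assumptions on our part, not requirements prescribed by ISO/IEC 42001:

```python
from dataclasses import dataclass

# A minimal AI risk register sketch. The 1-5 scales and field names are
# illustrative assumptions, not mandated by ISO/IEC 42001.

@dataclass
class Risk:
    system: str        # which AI system the risk applies to
    description: str   # what could go wrong
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)
    owner: str         # person or team accountable for mitigation

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, a common heuristic
        return self.likelihood * self.impact

def high_priority(risks, threshold=12):
    """Return risks at or above the threshold, highest score first."""
    return sorted(
        (r for r in risks if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )

# Hypothetical entries a governance committee might record
register = [
    Risk("chatbot", "hallucinated legal advice", 4, 4, "ML lead"),
    Risk("chatbot", "prompt-injection data leak", 3, 5, "Security"),
    Risk("recommender", "demographic bias in rankings", 2, 4, "Data science"),
]

for r in high_priority(register):
    print(f"{r.system}: {r.description} (score {r.score}, owner: {r.owner})")
```

Even a lightweight register like this gives the governance committee a shared artifact to review, and it can later be mapped onto the fuller documentation the standard calls for.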
It can be overwhelming to keep track of AI regulations around the world, so don't hesitate to reach out to BABL AI. Their team of audit experts can answer your questions and provide valuable insights.