New Report Highlights the Emergence and Role of AI Safety Institutes as Key Players in Global AI Governance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/24/2024
In News

A comprehensive report published this month by the Institute for AI Policy and Strategy (IAPS) highlights the establishment of AI Safety Institutes (AISIs) as pivotal in managing the rapidly growing risks associated with artificial intelligence (AI). The report, titled “Understanding the First Wave of AI Safety Institutes,” provides critical insights into the formation and functions of these institutes across major global regions, including the UK, US, Japan, and the European Union.


The AI Safety Institutes, established over the past year, represent a significant step forward in managing the risks of AI systems. The report from IAPS stresses that as AI technology accelerates, with applications in sectors ranging from healthcare to national security, the need for dedicated safety oversight has become more pressing than ever.


The report traces the origins of the first AI Safety Institutes to announcements made by the governments of the UK and US in late 2023. Since then, other regions, including Japan, Canada, and several EU countries, have followed suit in setting up similar organizations. According to the IAPS report, these institutes are state-backed entities that focus exclusively on ensuring AI technologies are developed and deployed safely.


The report notes that AISIs are designed to serve as independent, governmental organizations with a high degree of technical expertise. These institutes are tasked with evaluating AI systems, ensuring that the development of these technologies complies with safety standards, and mitigating any potential risks.


According to the report, the first wave of AI Safety Institutes is defined by three key characteristics:


    1. Safety-Driven: The primary objective of these institutes is to assess and mitigate the safety risks associated with advanced AI systems, especially those that pose significant dangers, such as the potential for misuse in cyberattacks or biotechnology.
    2. Government-Backed: The AISIs are typically part of governmental or public bodies, allowing them to operate with the authority necessary to enforce compliance with safety standards and oversee AI development in both public and private sectors.
    3. Highly Technical: These institutes house some of the world’s leading experts in AI and AI safety, focusing on advancing the science of safety and applying cutting-edge research to real-world AI applications.

The report by IAPS highlights the three main functions of AISIs: conducting research, developing standards, and promoting international cooperation.


  • Research: The AISIs conduct technical research to advance the understanding of AI safety. This research focuses on evaluating the risks posed by AI systems and proposing solutions to prevent those risks from materializing. The UK’s AISI, for instance, has been at the forefront of research into mitigating AI’s potential misuse in cyberattacks, while the US AISI, housed within NIST, has concentrated on safety evaluation tools and protocols.


  • Setting Standards: A key role of AISIs is to contribute to the development of global safety standards for AI. This ensures that AI technologies deployed across industries meet rigorous safety criteria. The report highlights the importance of harmonized standards that can be adopted internationally, ensuring a consistent approach to AI safety.


  • Promoting Cooperation: AISIs also play a central role in fostering global cooperation on AI safety. The report emphasizes the importance of collaboration among governments, academic institutions, and industry to share knowledge and best practices. The International Network of AI Safety Institutes, launched in 2024, is one example of these collaborative efforts.


The report acknowledges several challenges that these institutes face. One of the primary concerns is ensuring that AISIs remain impartial and are not unduly influenced by the private sector. There is also the issue of balancing safety evaluations with fostering AI innovation. Some critics argue that AISIs could be too narrowly focused on safety at the expense of broader concerns such as fairness, transparency, and bias.


Furthermore, the report points out that AISIs will need to address the growing complexity of AI technologies, as newer models, like generative AI, become more sophisticated. Ensuring that these technologies comply with safety standards without stifling innovation will be a major challenge for the institutes moving forward.


The IAPS report concludes that the first wave of AI Safety Institutes marks a proactive effort by governments worldwide to keep pace with the rapid development of AI technologies. As these institutes evolve, they are expected to play a key role in shaping global AI governance, ensuring that AI systems are not only innovative but also safe, secure, and aligned with societal values.


Need Help?


Keeping track of all the AI regulations, laws, and other policies around the globe can be difficult, especially when they can impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.

