A comprehensive report published this month by the Institute for AI Policy and Strategy (IAPS) highlights the establishment of AI Safety Institutes (AISIs) as pivotal in managing the rapidly growing risks associated with artificial intelligence (AI). The report, titled “Understanding the First Wave of AI Safety Institutes,” provides critical insights into the formation and functions of these institutes across major global regions, including the UK, US, Japan, and the European Union.
Established over the past year, the AI Safety Institutes represent a significant step toward managing the risks of advanced AI systems. The IAPS report stresses that as AI technology accelerates, with applications spanning sectors from healthcare to national security, the need for dedicated safety oversight has never been more pressing.
Origins and Purpose of the First AISIs
According to IAPS, the UK and US created the earliest safety institutes, and Canada, Japan, and several EU countries soon followed with their own. The report describes AISIs as public or government-supported organizations with technical expertise and a specific mandate: ensuring that AI systems operate safely. IAPS notes that these institutes share three defining traits. First, they focus on safety, concentrating on risks tied to advanced AI systems, such as cyber misuse or dangerous applications in biotechnology. Second, they operate with government backing, which gives them official standing to set safety expectations across the public and private sectors. Third, they employ highly technical teams capable of evaluating cutting-edge AI models.
Core Functions: Research, Standards, and Global Cooperation
The report outlines three core functions of these institutes.
- Research: Institutes run technical studies to understand AI system behavior and identify emerging risks. For example, the UK’s AISI has examined how AI could support cyberattacks, while the US institute, housed at NIST, has focused on new safety evaluation tools.
- Standards: AISIs push for clear, rigorous safety standards that can guide industry practices. IAPS emphasizes that harmonized standards are essential for global consistency, especially as companies deploy AI across borders.
- International Cooperation: The institutes encourage collaboration among governments, academic researchers, and industry partners. IAPS highlights the International Network of AI Safety Institutes, formed in 2024, as an example of how countries are coordinating shared safety goals.
Challenges Facing the First Wave of Institutes
The report also identifies several challenges. Maintaining independence from private companies remains a concern, as AISIs must avoid undue industry influence that could weaken safety work. Balancing innovation and regulation is another tension, and some critics argue that strict safety reviews might overshadow other important concerns, including fairness, bias, and transparency. IAPS also notes that newer AI models, including generative systems, introduce additional complexity: institutes must keep pace with rapid advances while avoiding measures that slow responsible innovation.
AISIs Expected to Play an Ongoing Global Role
IAPS concludes that the first generation of AI Safety Institutes represents a major step in global AI governance. As the institutes strengthen their research, standards, and partnerships, they are expected to shape how countries manage AI risks for years to come. Their work aims to ensure that AI remains not only innovative, but also safe, secure, and aligned with public values.
Need Help?
Keeping track of AI regulations, laws, and other policies around the globe can be difficult, especially when they directly affect you. Don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.