Global Collaboration on AI Safety Launched with Inaugural Convening in San Francisco

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/21/2024
In News

UPDATE — SEPTEMBER 2025:

Since the U.S. Department of Commerce and State Department launched the International Network of AI Safety Institutes (INASI) in November 2024, the initiative has grown into a central hub for multilateral AI safety coordination. Its first major milestone came at the Paris AI Action Summit in February 2025, where INASI expanded its membership to more than 25 government-affiliated institutes, including those of Canada, Japan, and the European Union. At the summit, members presented early findings from their joint evaluation of Meta’s Llama 3.1 model and agreed to harmonize testing protocols.

Through spring and summer 2025, INASI also secured new financial backing. The UK pledged £10 million for collaborative research, Japan committed roughly ¥1.5 billion for multilingual evaluation standards, and philanthropic organizations like Open Philanthropy added funding to support synthetic content detection. These contributions built on the initial $11 million pledged at launch and broadened the scope of INASI’s research efforts.

The risk assessment framework introduced in late 2024 was refined in mid-2025 to align with both the OECD’s AI risk classification system and the EU AI Act’s taxonomy. A crosswalk document was circulated to reduce duplication and foster interoperability between regional safety regimes. Meanwhile, the TRAINS Taskforce, led by U.S. defense agencies, carried out its first red-team exercises on advanced language models, with public summaries released in July noting vulnerabilities in cyber defense contexts and strategies for resilience.

Another milestone was the release in August 2025 of an open multilingual benchmark suite, covering more than 20 languages, with special emphasis on low-resource linguistic contexts. This dataset, designed collaboratively across INASI members, was framed as foundational for ensuring cultural and linguistic diversity in global AI safety evaluations.

ORIGINAL NEWS STORY:

Global Collaboration on AI Safety Launched with Inaugural Convening in San Francisco

The U.S. Department of Commerce and the U.S. Department of State launched the International Network of AI Safety Institutes (INASI) at the Network’s inaugural convening in San Francisco on November 20, 2024. The initiative aims to strengthen global cooperation on the safe development and use of artificial intelligence. By bringing nations together, INASI seeks to reduce risks while supporting responsible innovation.

Building a Foundation for Shared AI Safety Work

The two-day gathering brought government officials, industry leaders, academic researchers, and civil society groups into one room. Together, they laid the groundwork for long-term, international collaboration. The Network’s mission centers on building a shared scientific understanding of AI safety risks and creating best practices for testing and evaluating advanced models. This coordinated approach is intended to ensure AI technologies benefit societies while minimizing harm.

1. Mission Statement and Priority Areas

During the event, members adopted a joint mission statement that stressed the importance of cultural and linguistic diversity in AI safety work. They also identified four priority areas: advancing AI safety research, developing best practices for model testing, creating a unified approach to risk assessments, and ensuring global inclusion in AI development and oversight.

2. Funding for Synthetic Content Research

INASI also announced funding commitments totaling $11 million from governments and philanthropic partners. These resources will support research on synthetic content—AI-generated material that can enable impersonation, fraud, and other misuse. Key contributions include:

• Private Sector Funding: Contributions from the Knight Foundation, AI Safety Fund, and others to support interdisciplinary approaches to addressing these challenges.

3. Multilateral Testing Insights

Another highlight of the convening was the Network’s first joint testing effort, which evaluated Meta’s Llama 3.1 model. The assessment focused on multilingual performance, hallucination behavior, and general academic knowledge. The findings will inform broader evaluations ahead of the AI Action Summit in Paris in February 2025.

4. Risk Assessment Framework

The Network issued a joint statement proposing a six-pillar framework for assessing risks associated with advanced AI systems; its pillars include transparency, actionability, and reproducibility. The framework builds on international agreements such as the Bletchley Declaration and the Seoul Statement of Intent, aiming to align global AI safety practices.

5. TRAINS Taskforce

The U.S. announced the Testing Risks of AI for National Security (TRAINS) Taskforce, a collaborative effort involving federal agencies, including the Department of Defense and the NSA. The initiative will focus on safeguarding national security while advancing AI innovation in critical domains like cybersecurity and infrastructure.

The convening also set the stage for further international cooperation at the upcoming AI Action Summit in France. As the inaugural chair of INASI, the United States seeks to create a cohesive global framework for AI safety that encourages both innovation and responsible governance.

Need Help?

If you have questions or concerns about any global AI reports, guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
