Global Collaboration on AI Safety Launched with Inaugural Convening in San Francisco

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/21/2024
In News

UPDATE — SEPTEMBER 2025: Since the U.S. Department of Commerce and the U.S. Department of State launched the International Network of AI Safety Institutes (INASI) in November 2024, the initiative has grown into a central hub for multilateral AI safety coordination. Its first major milestone came at the Paris AI Action Summit in February 2025, where INASI expanded its membership to more than 25 government-affiliated institutes, including those of Canada, Japan, and the European Union. At the summit, members presented early findings from their joint evaluation of Meta’s Llama 3.1 model and agreed to harmonize testing protocols.

Through spring and summer 2025, INASI also secured new financial backing. The UK pledged £10 million for collaborative research, Japan committed roughly ¥1.5 billion for multilingual evaluation standards, and philanthropic organizations like Open Philanthropy added funding to support synthetic content detection. These contributions built on the initial $11 million pledged at launch and broadened the scope of INASI’s research efforts.

The risk assessment framework introduced in late 2024 was refined in mid-2025 to align with both the OECD’s AI risk classification system and the EU AI Act’s taxonomy. A crosswalk document was circulated to reduce duplication and foster interoperability between regional safety regimes. Meanwhile, the TRAINS Taskforce, led by U.S. defense agencies, carried out its first red-team exercises on advanced language models; public summaries released in July noted vulnerabilities in cyber defense contexts and outlined strategies for resilience.

Another milestone was the release in August 2025 of an open multilingual benchmark suite, covering more than 20 languages, with special emphasis on low-resource linguistic contexts. This dataset, designed collaboratively across INASI members, was framed as foundational for ensuring cultural and linguistic diversity in global AI safety evaluations.

ORIGINAL NEWS STORY:

Global Collaboration on AI Safety Launched with Inaugural Convening in San Francisco

The U.S. Department of Commerce and the U.S. Department of State jointly launched the International Network of AI Safety Institutes (INASI) during its inaugural convening on November 20, 2024. The initiative aims to advance global cooperation on the safe development and application of artificial intelligence (AI), addressing risks while fostering innovation.

The two-day event brought together government representatives, industry leaders, academic experts, and civil society to establish a foundation for collaboration on AI safety. The Network’s mission is to build a shared scientific understanding of AI safety risks and to develop best practices for testing and evaluation. This marks a significant step toward ensuring AI technologies benefit societies worldwide while minimizing harm. These were the key developments announced:

1. Mission Statement and Priority Areas

The Network adopted a joint mission statement emphasizing the need for cultural and linguistic diversity in addressing AI safety challenges. It identified four core priorities: conducting AI safety research, developing best practices for model testing, creating a unified approach to risk assessments, and promoting global inclusion in AI development.

2. Funding for Synthetic Content Research

With $11 million committed from governments and philanthropic organizations, the Network will focus on mitigating risks associated with synthetic content: AI-generated material that can be used for fraud, impersonation, and other deceptive purposes. Highlights include:

    • Australia’s CSIRO: $1.42 million annually for synthetic content research.
    • Private Sector Funding: Contributions from the Knight Foundation, AI Safety Fund, and others to support interdisciplinary approaches to addressing these challenges.

3. Multilateral Testing Insights

The Network conducted its first joint testing exercise, evaluating Meta’s Llama 3.1 AI model. The study focused on multilingual capabilities, hallucinations in specific domains, and general academic knowledge. Insights from this pilot will inform broader evaluations ahead of the AI Action Summit in Paris in February 2025.

4. Risk Assessment Framework

The Network issued a joint statement proposing a six-pillar framework for assessing risks associated with advanced AI systems. Its pillars include transparency, actionability, and reproducibility. The framework builds on international agreements such as the Bletchley Declaration and the Seoul Statement of Intent, aiming to align global AI safety practices.

5. TRAINS Taskforce

The U.S. announced the Testing Risks of AI for National Security (TRAINS) Taskforce, a collaborative effort involving federal agencies, including the Department of Defense and the National Security Agency (NSA). The initiative will focus on safeguarding national security while advancing AI innovation in critical domains such as cybersecurity and infrastructure.

The convening also set the stage for further international cooperation at the upcoming AI Action Summit in France. As the inaugural chair of INASI, the United States seeks to create a cohesive global framework for AI safety that encourages both innovation and responsible governance.

Need Help?

If you have questions or concerns about global AI reports, guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
