Global Collaboration on AI Safety Launched with Inaugural Convening in San Francisco
The U.S. Department of Commerce and the U.S. Department of State launched the International Network of AI Safety Institutes (INASI) during its inaugural convening on November 20, 2024. The initiative aims to strengthen global cooperation on the safe development and use of artificial intelligence. By bringing nations together, INASI seeks to reduce risks while supporting responsible innovation.
Building a Foundation for Shared AI Safety Work
The two-day gathering brought government officials, industry leaders, academic researchers, and civil society groups into one room. Together, they laid the groundwork for long-term, international collaboration. The Network’s mission centers on building a shared scientific understanding of AI safety risks and creating best practices for testing and evaluating advanced models. This coordinated approach is intended to ensure AI technologies benefit societies while minimizing harm.
Mission Statement and Priority Areas
During the event, members adopted a joint mission statement that stressed the importance of cultural and linguistic diversity in AI safety work. They also identified four priority areas: advancing AI safety research, developing best practices for model testing, creating a unified approach to risk assessments, and ensuring global inclusion in AI development and oversight.
Funding for Synthetic Content Research
INASI also announced funding commitments totaling $11 million from governments and philanthropic partners. These resources will support research on synthetic content—AI-generated material that can enable impersonation, fraud, and other misuse. Key contributions include:
- USAID Contribution: $3.8 million for building AI safety capacity overseas.
- Australia’s CSIRO: $1.42 million annually for synthetic content research.
- Korea’s Commitment: $7.2 million over four years for safeguarding AI applications.
- Private Sector Funding: Contributions from the Knight Foundation, AI Safety Fund, and others to support interdisciplinary approaches to addressing these challenges.
Multilateral Testing Insights
Another highlight of the convening was the Network’s first joint testing effort, which evaluated Meta’s Llama 3.1 model. The assessment focused on multilingual performance, hallucination behavior, and general academic knowledge. The findings will inform broader evaluations ahead of the AI Action Summit in Paris in February 2025.
Risk Assessment Framework
The Network issued a joint statement proposing a six-pillar framework for assessing risks associated with advanced AI systems; its pillars include transparency, actionability, and reproducibility. The framework builds on earlier international agreements, including the Bletchley Declaration and the Seoul Statement of Intent, with the goal of aligning global AI safety practices.
TRAINS Taskforce
The U.S. announced the Testing Risks of AI for National Security (TRAINS) Taskforce, a collaborative effort involving federal agencies, including the Department of Defense and the NSA. The initiative will focus on safeguarding national security while advancing AI innovation in critical domains like cybersecurity and infrastructure.
The convening also set the stage for further international cooperation at the upcoming AI Action Summit in France. As the inaugural chair of INASI, the United States seeks to create a cohesive global framework for AI safety that encourages both innovation and responsible governance.
Need Help?
If you have questions or concerns about global AI reports, guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you stay informed and compliant.


