The U.S. Department of Commerce and the U.S. Department of State jointly launched the International Network of AI Safety Institutes (INASI) during its inaugural convening on November 20, 2024. The initiative aims to advance global cooperation on the safe development and application of artificial intelligence (AI), addressing risks while fostering innovation.
The two-day event brought together government representatives, industry leaders, academic experts, and civil society groups to establish a foundation for collaboration on AI safety. The Network's mission is to build a shared scientific understanding of AI safety risks and to develop best practices for testing and evaluation, marking a significant step toward ensuring AI technologies benefit societies worldwide while minimizing harm. Here are the key developments announced at the convening:
- Mission Statement and Priority Areas
The Network adopted a joint mission statement emphasizing the need for cultural and linguistic diversity in addressing AI safety challenges. It identified four core priorities: conducting AI safety research, developing best practices for model testing, creating a unified approach to risk assessments, and promoting global inclusion in AI development.
- Funding for Synthetic Content Research
With $11 million committed by governments and philanthropic organizations, the Network will focus on mitigating risks from synthetic content: AI-generated material that can be used for fraud, impersonation, and other harms. Highlights include:
- USAID Contribution: $3.8 million for building AI safety capacity overseas.
- Australia's CSIRO: $1.42 million annually for synthetic content research.
- Korea's Commitment: $7.2 million over four years for safeguarding AI applications.
- Private Sector Funding: Contributions from the Knight Foundation, AI Safety Fund, and others to support interdisciplinary approaches to these challenges.
- Multilateral Testing Insights
The Network conducted its first joint testing exercise, evaluating Meta’s Llama 3.1 AI model. The study focused on multilingual capabilities, hallucinations in specific domains, and general academic knowledge. Insights from this pilot will inform broader evaluations ahead of the AI Action Summit in Paris in February 2025.
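To give a sense of what such a test can involve, here is a minimal, hypothetical sketch of a multilingual question-answering evaluation in Python. The `generate` stub, the question set, and the substring-based scoring rule are all illustrative assumptions for this post; they do not reflect the Network's actual methodology or data.

```python
# Minimal, hypothetical sketch of a multilingual QA evaluation.
# The model call, questions, and scoring rule are illustrative only.

def generate(prompt: str) -> str:
    # Stub standing in for a real call to the model under test.
    return "At sea level, water boils at 100 degrees Celsius."

# Each item pairs the same question in several languages with
# reference answers; real suites are far larger and domain-specific.
EVAL_SET = [
    {
        "prompts": {
            "en": "At what temperature does water boil at sea level, in Celsius?",
            "es": "¿A qué temperatura hierve el agua al nivel del mar, en grados Celsius?",
        },
        "references": ["100"],
    },
]

def is_correct(answer: str, references: list[str]) -> bool:
    # Crude substring check; production evaluations use richer scoring.
    return any(ref.lower() in answer.lower() for ref in references)

def run_eval() -> dict[str, float]:
    # Returns per-language accuracy, exposing gaps between languages.
    hits: dict[str, list[int]] = {}
    for item in EVAL_SET:
        for lang, prompt in item["prompts"].items():
            hits.setdefault(lang, []).append(
                int(is_correct(generate(prompt), item["references"]))
            )
    return {lang: sum(h) / len(h) for lang, h in hits.items()}

if __name__ == "__main__":
    print(run_eval())  # e.g. {'en': 1.0, 'es': 1.0} with the stub above
```

Per-language accuracy gaps, or fluent answers that fail the reference check, are exactly the kinds of signals a multilingual, hallucination-focused evaluation is designed to surface.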
- Risk Assessment Framework
The Network issued a joint statement proposing a six-pillar framework for assessing risks associated with advanced AI systems; its pillars include transparency, actionability, and reproducibility. The framework builds on international agreements such as the Bletchley Declaration and the Seoul Statement of Intent, aiming to align global AI safety practices.
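As a purely illustrative aid, the sketch below shows one way an institute might structure an assessment record so that pillars like transparency and reproducibility are reflected in the data it keeps. The field names and example values are assumptions for this post, not taken from the joint statement.

```python
# Hypothetical record type for documenting a risk assessment in a
# transparent, reproducible way. Field names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    system_name: str                 # system under assessment
    system_version: str              # pinned version enables reproducibility
    risk_category: str               # e.g. "synthetic content misuse"
    methodology: str                 # transparency: how the risk was tested
    evidence: list[str] = field(default_factory=list)  # supporting artifacts
    severity: str = "unrated"        # actionability: drives mitigation priority
    recommended_actions: list[str] = field(default_factory=list)

# Example usage with made-up values.
assessment = RiskAssessment(
    system_name="example-model",
    system_version="1.0.0",
    risk_category="synthetic content misuse",
    methodology="red-team prompts scored against a published rubric",
    evidence=["run-logs/2024-11-20.json"],
    severity="medium",
    recommended_actions=["add content provenance metadata", "re-test after mitigation"],
)
print(assessment.severity)  # "medium"
```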
- TRAINS Taskforce
The U.S. announced the Testing Risks of AI for National Security (TRAINS) Taskforce, a collaborative effort involving federal agencies, including the Department of Defense and the NSA. The initiative will focus on safeguarding national security while advancing AI innovation in critical domains like cybersecurity and infrastructure.
The convening also set the stage for further international cooperation at the upcoming AI Action Summit in France. As the inaugural chair of INASI, the United States seeks to create a cohesive global framework for AI safety that encourages both innovation and responsible governance.
Need Help?
If you have questions or concerns about any global AI reports, guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you stay informed and compliant.