UPDATE — AUGUST 2025: The International Network of AI Safety Institutes formally launched at its inaugural meeting in San Francisco (Nov 20–21, 2024), chaired by the U.S. and joined by the network's nine other members, including the EU. Delegates agreed on three priority areas: synthetic content risks, foundation model testing, and advanced AI risk assessments. Ahead of the AI Action Summit in Paris (Feb 2025), the network secured $11M in global funding commitments, including $3.8M from the U.S. through USAID, to support international AI safety capacity building. Since then, members—including the EU AI Office—have begun joint testing exercises, policy alignment efforts, and information-sharing mechanisms. The network reconvened in Canada in mid-2025 to expand collaboration, cementing its role as a central pillar of global AI safety governance.
ORIGINAL NEWS STORY:
U.S. to Host Inaugural International AI Safety Institutes Meeting
In a move to strengthen global cooperation on artificial intelligence (AI) safety, U.S. Secretary of Commerce Gina Raimondo and U.S. Secretary of State Antony Blinken announced that the United States will host the first-ever meeting of the International Network of AI Safety Institutes. The event will take place in San Francisco on November 20–21, 2024, bringing together technical experts from around the world.
Strengthening Global AI Collaboration
The initiative follows Raimondo’s announcement at the AI Seoul Summit in May 2024 and reflects the Biden-Harris Administration’s growing focus on global AI governance. The two-day event will align international efforts on safety and promote knowledge-sharing to ensure that AI technologies are developed responsibly and securely. “AI is the defining technology of our generation. With AI evolving at a rapid pace, we at the Department of Commerce, and across the Biden-Harris Administration, are pulling every lever,” said Raimondo. “That includes close, thoughtful coordination with our allies and like-minded partners… We want the rules of the road on AI to be underpinned by safety, security, and trust.”
Member Nations and Shared Goals
The International Network of AI Safety Institutes includes Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the U.K., and the U.S. Each member has committed to advancing AI safety and building international standards that promote fairness and shared responsibility. The San Francisco convening will be the first opportunity for experts from each member to collaborate directly, with representatives from government-backed research centers and safety institutes setting a coordinated agenda.
Preparing for the AI Action Summit
The meeting will also lay the groundwork for the AI Action Summit, scheduled for February 2025 in Paris. Participants plan to address several critical topics, including foundation model testing, transparency requirements, and methods for mitigating advanced AI risks. Secretary Blinken highlighted the importance of international cooperation. “Strengthening global collaboration on AI safety is essential to harness AI technology to solve the world’s greatest challenges. The AI Safety Network stands as a cornerstone of this effort,” he said.
Inclusive Participation and Broader Impact
Beyond government participation, the event will include experts from civil society, academia, and industry. These stakeholders will help shape discussions, offering insights into the ethical, technical, and societal dimensions of AI development. Their participation aims to ensure that AI governance remains transparent and inclusive. The meeting represents a major step toward a unified international framework for AI safety, one intended to ensure that innovation proceeds alongside accountability and trust.
Need Help?
If you have questions about how to navigate the U.S. or global AI regulatory landscape, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you're informed and compliant.