UPDATE — AUGUST 2025: This article remains accurate. It covers U.S. Secretary of Commerce Gina Raimondo’s announcement at the AI Seoul Summit in May 2024, where she outlined the Biden administration’s strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI), emphasizing a science-driven, safety-first approach to global AI governance.
Led by NIST, AISI’s mission is to mitigate risks, set evaluation standards, and promote responsible innovation. Raimondo also announced a global scientific network of AI Safety Institutes, linking efforts in Canada, the UK, Japan, Singapore, and the EU. The first follow-up convening took place in San Francisco in late 2024, with further cooperation continuing into 2025, including the AI Action Summit in Paris.
The Biden administration positioned this plan as a framework for trust and human rights in AI. The Trump administration, by contrast, rolled back several Biden-era safety measures in 2025, revoking the 2023 Executive Order on trustworthy AI and shifting toward deregulation and rapid AI expansion. Raimondo’s vision thus remains a marker of the Biden administration’s priorities, standing apart from the current deregulatory approach.
ORIGINAL NEWS STORY:
U.S. Secretary of Commerce Unveils AI Safety Goals at AI Seoul Summit
As the AI Seoul Summit kicks off, U.S. Secretary of Commerce Gina Raimondo announced a strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI) and outlined ambitious plans to enhance AI safety globally. Raimondo emphasized the Biden administration’s commitment to responsible AI development, unveiling plans for a global network of AI Safety Institutes and a future convening in San Francisco.
Raimondo’s Vision
Raimondo described AISI’s mission to mitigate AI risks while supporting innovation. NIST, which launched AISI, will lead efforts to advance the science of AI safety and provide responsible guidance for AI development. “Recent advances in AI carry exciting, life-changing potential for our society,” Raimondo said. “But only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly.”
She stressed that safety fosters innovation, making cooperation with global allies essential. Raimondo underscored that democratic nations must write the “rules of the road” for AI to protect rights and trust. AISI’s goals include testing advanced models, developing evaluation guidelines, and coordinating research on risk mitigation. The institute will work closely with industry, civil society, and international partners to spread best practices.
Global Network for AI Safety
Raimondo also launched a global network for AI safety science. The initiative expands collaboration with institutes in the UK, Japan, Canada, Singapore, and the European AI Office. The network builds on commitments made in the Seoul Statement of Intent. Its goal is to enhance cooperation and promote AI systems that are safe, secure, and trustworthy. Raimondo said the network will usher in a new phase of global coordination on AI safety science and governance.
Conclusion
To advance these goals, AISI established a Bay Area presence and plans to host global institutes and stakeholders later in the year. The location positions the institute to attract talent and foster innovation in one of the world’s leading AI hubs.
Need Help?
If you are curious about how this and other global regulations could impact your company, reach out to BABL AI. One of their audit experts will gladly provide assistance.