U.S. Secretary of Commerce Unveils AI Safety Goals at AI Seoul Summit

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/22/2024
In News

UPDATE – FEBRUARY 2026:

Since the August 2025 update, the international AI safety ecosystem shaped by the U.S. Artificial Intelligence Safety Institute (AISI) has continued to evolve. Meanwhile, U.S. domestic policy shifted toward a more deregulatory approach under the Trump administration.

One of the most significant developments came in December 2025, when the original International Network of AI Safety Institutes expanded and rebranded as the International Network for Advanced AI Measurement, Evaluation and Science. The updated structure places stronger emphasis on scientific measurement, benchmarking, and evaluation of advanced AI systems rather than broader policy alignment. Membership has expanded beyond the original group to include additional national and regional partners. This reflects continued global demand for coordinated technical standards.

The network’s work has continued through collaborative research and reporting. In October 2025, participants released a key update focused on rapidly advancing AI capabilities and associated risks, highlighting challenges in model evaluation, risk forecasting, and international coordination. Building on this work, a second International AI Safety Report was published in February 2026. It reviewed emerging technical risks and outlined priorities for shared evaluation methodologies and safety science collaboration.

These developments show that, despite shifts in U.S. federal policy after the revocation of Biden-era AI executive actions, the scientific collaboration initiated under Secretary Gina Raimondo’s vision has continued internationally. The global network increasingly operates as a technical, research-driven coalition, advancing evaluation frameworks and safety measurement independently of domestic political changes.

At the same time, the contrast between administrations remains clear. The Biden-era strategy emphasized global governance, risk mitigation, and human-rights-centered AI oversight, while the Trump administration has prioritized innovation speed, deregulation, and competitive AI expansion. As a result, AISI now functions more as a scientific and international coordination platform than as the central pillar of U.S. AI policy.

Overall, the initiative announced at the AI Seoul Summit remains historically important as the foundation for today’s international AI safety collaboration. However, the focus has shifted from policy signaling toward technical evaluation, measurement science, and cross-border research coordination. These areas are likely to shape how advanced AI systems are assessed globally moving forward.

ORIGINAL NEWS STORY:

U.S. Secretary of Commerce Unveils AI Safety Goals at AI Seoul Summit

As the AI Seoul Summit kicks off, U.S. Secretary of Commerce Gina Raimondo announced a strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI) and outlined ambitious plans to enhance AI safety globally. Raimondo emphasized the Biden administration’s commitment to responsible AI development. She unveiled plans for a global network of AI Safety Institutes and a future convening in San Francisco.


Raimondo’s Vision


Raimondo described AISI’s mission to mitigate AI risks while supporting innovation. NIST, which launched AISI, will lead efforts to advance the science of AI safety and provide responsible guidance for AI development. “Recent advances in AI carry exciting, life-changing potential for our society,” Raimondo said. “But only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly.”

She stressed that safety fosters innovation, making cooperation with global allies essential. Raimondo underscored that democratic nations must write the “rules of the road” for AI to protect rights and trust. AISI’s goals include testing advanced models, developing evaluation guidelines, and coordinating research on risk mitigation. The institute will work closely with industry, civil society, and international partners to spread best practices.


Global Network for AI Safety


Raimondo also launched a global network for AI safety science. The initiative expands collaboration with institutes in the UK, Japan, Canada, Singapore, and the European AI Office. The network builds on commitments made in the Seoul Statement of Intent, with the goal of enhancing cooperation and promoting AI systems that are safe, secure, and trustworthy. Raimondo said the network will usher in a new phase of global coordination on AI safety science and governance.


Conclusion


To advance these goals, AISI established a Bay Area presence and plans to host global institutes and stakeholders later in the year. The location positions the institute to attract talent and foster innovation in one of the world’s leading AI hubs.


Need Help?


Those curious about how this and other global regulations could impact their company are encouraged to reach out to BABL AI. One of their audit experts will gladly provide assistance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.