U.S. AI Safety Institute Partners with Anthropic and OpenAI to Advance AI Safety Research

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/29/2024
In News

UPDATE — AUGUST 2025: The U.S. Artificial Intelligence Safety Institute, now rebranded as the Center for AI Standards and Innovation (CAISI), has reaffirmed its partnerships with Anthropic and OpenAI through new Memoranda of Understanding. These agreements grant government researchers early access to cutting-edge AI models for testing and evaluation. The rebrand, announced in June 2025, marks a strategic shift in focus. CAISI’s mission now extends beyond “safety” to emphasize U.S. leadership in AI standards, national security, and innovation. The Center continues to study AI risks, including threats related to cybersecurity and biosecurity, while strengthening global cooperation on responsible AI development.

ORIGINAL NEWS STORY:

U.S. AI Safety Institute Partners with Anthropic and OpenAI to Advance AI Safety Research

The U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST) under the Department of Commerce, has announced new partnerships with Anthropic and OpenAI. The collaborations, formalized through Memoranda of Understanding, create a framework for joint research on AI safety—including risk evaluation, model testing, and mitigation strategies for advanced AI systems.

The partnerships mark a major milestone in the U.S. AI Safety Institute’s efforts to advance the science of AI safety, a critical component of the broader mission to foster technological innovation while safeguarding public trust and security. Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized the importance of these collaborations, stating, “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”

Advancing Safe and Responsible AI Development

Under the new agreements, the U.S. AI Safety Institute will receive early access to major AI models from Anthropic and OpenAI. This access allows federal researchers to test capabilities and identify safety risks before the models reach the public. Early-stage evaluation helps ensure that AI systems are deployed responsibly and that potential harms are addressed in advance. The Institute will also share feedback with both companies on safety improvements and performance assessments. These reviews will occur in close coordination with the U.K. AI Safety Institute, demonstrating an emerging international effort to harmonize global standards for AI oversight.

Aligning With Federal AI Policy

The collaborations support the goals outlined in the Biden-Harris administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which emphasizes AI safety research, robust governance, and public accountability. The partnerships also build on voluntary safety commitments that leading AI developers made to the White House, signaling continued cooperation between industry and government to promote trustworthy AI.

Evaluating Risk and Setting Standards

Research under the agreements will cover a range of risk domains, from model misuse and bias to national security implications. The Institute’s evaluations will provide essential data to shape future policies, technical standards, and best practices for responsible AI deployment. Director Kelly called the agreements “an important milestone” and stressed that this is only the beginning of a long-term commitment to responsible AI stewardship. Together, the U.S. AI Safety Institute, industry partners, and international allies are laying the groundwork for a secure and ethical global AI ecosystem.

Need Help?

If you have questions about the U.S. AI Safety Institute’s initiatives or how global AI regulations could affect your organization, contact BABL AI. Their Audit Experts can help you assess risks, build compliance frameworks, and strengthen responsible AI governance.

Photo by eranicle on depositphotos.com – Tashkent, Uzbekistan – 26 of October, 2023: Display of computer monitor showing main web page of Open AI company – ChatGPT developer. Illustrative editorial
