U.S. AI Safety Institute Partners with Anthropic and OpenAI to Advance AI Safety Research

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/29/2024
In News

In a major step toward ensuring the safety and reliability of advanced artificial intelligence (AI) systems, the U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST) under the U.S. Department of Commerce, has announced new agreements with leading AI developers Anthropic and OpenAI. These agreements, formalized through Memoranda of Understanding, establish a framework for collaborative research on AI safety, including the evaluation of capabilities and the mitigation of risks associated with advanced AI models.


The partnerships mark a major milestone in the U.S. AI Safety Institute’s efforts to advance the science of AI safety, a critical component of the broader mission to foster technological innovation while safeguarding public trust and security. Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized the importance of these collaborations, stating, “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”


Under the terms of the agreements, the U.S. AI Safety Institute will gain early access to major new AI models from both Anthropic and OpenAI, allowing the Institute to conduct in-depth research and testing before and after these models are released to the public. This early access is crucial for identifying potential safety risks and developing methods to address them, ensuring that AI systems are deployed responsibly and ethically.


The collaborations will also involve the U.S. AI Safety Institute providing feedback to both companies on potential safety improvements to their models. This feedback process will be conducted in close collaboration with the U.K. AI Safety Institute, reflecting a growing international effort to standardize and enhance AI safety protocols across borders.


The agreements align with the goals outlined in the Biden-Harris administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which underscores the need for robust AI safety measures as the technology continues to evolve. The partnerships also build on the voluntary commitments made by leading AI developers to the administration, further solidifying the federal government’s role in guiding the responsible development of AI technologies.


Evaluations conducted under these new agreements will cover a range of risk areas, helping to ensure that AI technologies are not only powerful and efficient but also secure and trustworthy. The Institute’s research will play a pivotal role in shaping the future of AI, providing the data and insights needed to guide policy decisions and industry practices.


These agreements are just the beginning of what promises to be an ongoing effort to build a safe and secure AI ecosystem. As Kelly noted, “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.” The collaborative efforts between the U.S. AI Safety Institute, Anthropic, OpenAI, and international partners will be crucial in defining the standards and practices that will shape the future of AI development and deployment.


Need Help?


If you have questions or concerns about how to navigate the U.S. and global AI regulatory landscape, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.


Photo by eranicle on depositphotos.com – Tashkent, Uzbekistan, October 26, 2023: Display of a computer monitor showing the main web page of OpenAI, the developer of ChatGPT. Illustrative editorial.
