DOE and DOC Partner to Strengthen AI Safety and Trustworthiness With New Memorandum of Understanding

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/13/2024
In News

The U.S. Department of Energy (DOE) and the Department of Commerce (DOC), represented by the National Institute of Standards and Technology (NIST), have formalized a new Memorandum of Understanding (MOU) to jointly advance artificial intelligence (AI) research focused on safety, security, and trustworthiness. Signed by key leaders from each department, the MOU outlines a collaborative approach to address AI risks and develop resources that will support both scientific progress and national security.

Under this agreement, DOE and DOC will leverage their unique resources, including DOE’s National Laboratories and NIST’s AI expertise, through the U.S. AI Safety Institute and its consortium, the Artificial Intelligence Safety Institute Consortium. The agencies will work together to assess AI models’ capabilities, risks, and impacts on fields such as national security, public safety, and the economy. They will also coordinate on creating testbeds (specialized environments that enable AI testing) and will focus on privacy-enhancing technologies to bolster AI systems’ data protection standards.

This collaboration follows Executive Order 14110, which requires federal agencies to ensure safe and trustworthy AI development. The order directs DOC and DOE to collaborate on testbeds for AI testing and tasks DOC with setting guidelines for differential privacy in AI, a technique that aims to safeguard individual data within large datasets.
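To make the differential-privacy reference concrete: the core idea is to add calibrated random noise to query results so that no single individual's presence in a dataset can be inferred. The sketch below is purely illustrative and is not drawn from the MOU or any NIST guideline; it shows the classic Laplace mechanism applied to a simple counting query, with the noise scale set by the query's sensitivity and a chosen privacy parameter epsilon.

```python
import random


def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(records, predicate, epsilon: float) -> float:
    """Return a differentially private count of records matching `predicate`.

    Adding or removing one record changes a count by at most 1, so the
    query's sensitivity is 1 and the noise scale is sensitivity / epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon)


# Example: count records with value below 40, with a privacy budget epsilon.
records = list(range(100))
noisy = dp_count(records, lambda r: r < 40, epsilon=1.0)
```

Smaller epsilon values inject more noise (stronger privacy, less accuracy); larger values do the reverse. Real deployments track a cumulative privacy budget across queries, which this toy sketch omits.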

The DOE brings extensive experience with high-performance computing and testing capabilities through its network of 17 National Laboratories, which address complex scientific challenges. These labs are a critical resource: they host four of the ten fastest supercomputers in the world, giving DOE the computational power needed for large-scale AI research and evaluation. Meanwhile, NIST, as the government’s lead in scientific measurement and AI safety, will focus on establishing guidelines for AI risk management and overseeing the safety evaluation of advanced AI models.

Additionally, DOE will facilitate the use of its high-performance and cloud-based testbed resources for these AI safety evaluations. In turn, NIST will lead in assessing AI models for broader societal impacts and in coordinating with AI model developers for access to those models. The partnership will also enable DOE and NIST to conduct joint “red-teaming” exercises: simulations that test AI model security by exposing models to controlled threats and vulnerabilities.

As Principal Coordinators, Helena Fu from DOE and Elizabeth Kelly from DOC will oversee this collaboration, including regular meetings to align ongoing and future AI initiatives. The MOU, effective for five years, allows both departments to enhance U.S. leadership in AI safety and provides a structured approach to addressing the challenges and risks posed by advanced AI technologies.

Need Help?

If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
