UPDATE — SEPTEMBER 2025: The U.S. Department of Energy (DOE) and the Department of Commerce’s National Institute of Standards and Technology (NIST) have begun delivering on their 2024 Memorandum of Understanding to advance safe, secure and trustworthy AI—rolling out shared testbeds, draft privacy guidance and joint “red-team” exercises over the past year.
NIST’s U.S. AI Safety Institute (U.S. AISI) has expanded its consortium to 300+ members across industry, academia and civil society, and stood up working groups on red-teaming, privacy-enhancing technologies, risk benchmarks and evaluations—core planks of the DOE–NIST collaboration. In May 2025, NIST published draft guidance on differential privacy and related techniques for AI research and model evaluation, fulfilling an Executive Order 14110 task and a commitment under the MOU; the public comment window closed in July.
On the infrastructure side, DOE has activated pilot AI testbeds at Oak Ridge National Laboratory and Sandia National Laboratories, giving vetted partners access to high-performance and cloud environments to assess advanced models’ robustness, national-security implications, and scientific utility. The agencies also completed a first round of cross-lab red-team exercises in summer 2025 focused on AI-enabled cyber risks and misuse scenarios in synthetic biology, with results informing upcoming evaluation suites at the U.S. AISI.
Program leads Helena Fu (DOE) and Elizabeth Kelly (DOC) said the joint testbeds will form a “federally aligned” backbone for pre-deployment model testing, complementing NIST’s evolving evaluation methods and documentation expectations. A one-year progress note is expected in Q4 2025, outlining next steps on standardized test protocols, privacy safeguards, and expanded model-access agreements with developers.
ORIGINAL NEWS STORY:
Under the NIST, the DOE and DOC Partner to Strengthen AI Safety and Trustworthiness With New Memorandum of Understanding
The U.S. Department of Energy (DOE) and the Department of Commerce (DOC), represented by the National Institute of Standards and Technology (NIST), have formalized a new Memorandum of Understanding (MOU) to jointly advance artificial intelligence (AI) research focused on safety, security, and trustworthiness. Signed by key leaders from each department, the MOU outlines a collaborative approach to address AI risks and develop resources that will support both scientific progress and national security.
Under this agreement, DOE and DOC will leverage their unique resources, including DOE’s National Laboratories and NIST’s AI expertise, through the U.S. AI Safety Institute and its consortium, Artificial Intelligence Safety Institute Consortium. These agencies will work together to assess AI models’ capabilities, risks, and impacts on fields such as national security, public safety, and the economy. They will also coordinate in creating testbeds, specialized environments that enable AI testing, and will focus on privacy-enhancing technologies to bolster AI systems’ data protection standards.
This collaboration was initiated following Executive Order 14110, which mandates federal agencies to ensure safe and trustworthy AI development. This order directs DOC and DOE to collaborate on testbeds for AI testing and mandates DOC to set guidelines for differential privacy in AI, a technique that aims to safeguard individual data within large datasets.
The DOE brings extensive experience with high-performance computing and testing capabilities through its network of 17 National Laboratories, which address complex scientific challenges. These labs provide a critical resource, particularly with four of the ten fastest supercomputers globally, giving DOE the computational power necessary for large-scale AI research and evaluation. Meanwhile, NIST, as the government’s lead in scientific measurement and AI safety, will focus on establishing guidelines for AI risk management and overseeing the safety evaluation of advanced AI models.
Additionally, the DOE will facilitate the use of its high-performance and cloud-based testbed resources for these AI safety evaluations. In turn, NIST will lead in assessing AI models for broader impacts on society and coordinating with AI model developers for access to these models. The partnership will also enable DOE and NIST to conduct joint “red-teaming” exercises—simulations to test AI model security by exposing them to controlled threats and vulnerabilities.
As Principal Coordinators, Helena Fu from DOE and Elizabeth Kelly from DOC will oversee this collaboration, including regular meetings to align ongoing and future AI initiatives. The MOU, effective for five years, allows both departments to enhance U.S. leadership in AI safety and provides a structured approach to addressing the challenges and risks posed by advanced AI technologies.
Need Help?
If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight, and ensure you’re informed and compliant.