UPDATE — SEPTEMBER 2025: The U.S. Department of Energy (DOE) and the Department of Commerce’s National Institute of Standards and Technology (NIST) have begun delivering on their 2024 Memorandum of Understanding to advance safe, secure and trustworthy AI—rolling out shared testbeds, draft privacy guidance and joint “red-team” exercises over the past year.
NIST’s U.S. AI Safety Institute (U.S. AISI) has expanded its consortium to 300+ members across industry, academia and civil society, and stood up working groups on red-teaming, privacy-enhancing technologies, risk benchmarks and evaluations—core planks of the DOE–NIST collaboration. In May 2025, NIST published draft guidance on differential privacy and related techniques for AI research and model evaluation, fulfilling an Executive Order 14110 task and a commitment under the MOU; the public comment window closed in July.
On the infrastructure side, DOE has activated pilot AI testbeds at Oak Ridge National Laboratory and Sandia National Laboratories, giving vetted partners access to high-performance and cloud environments to assess advanced models’ robustness, national-security implications, and scientific utility. The agencies also completed a first round of cross-lab red-team exercises in summer 2025 focused on AI-enabled cyber risks and misuse scenarios in synthetic biology, with results informing upcoming evaluation suites at the U.S. AISI.
Program leads Helena Fu (DOE) and Elizabeth Kelly (DOC) said the joint testbeds will form a “federally aligned” backbone for pre-deployment model testing, complementing NIST’s evolving evaluation methods and documentation expectations. A one-year progress note is expected in Q4 2025, outlining next steps on standardized test protocols, privacy safeguards, and expanded model-access agreements with developers.
ORIGINAL NEWS STORY:
Under the NIST, the DOE and DOC Partner to Strengthen AI Safety and Trustworthiness With New Memorandum of Understanding
The U.S. Department of Energy (DOE) and the Department of Commerce (DOC), represented by the National Institute of Standards and Technology (NIST), have formalized a new Memorandum of Understanding (MOU) to jointly advance artificial intelligence (AI) research focused on safety, security, and trustworthiness. Signed by key leaders from each department, the MOU outlines a collaborative approach to address AI risks and develop resources that will support both scientific progress and national security.
Joint Use of National Labs and AI Safety Expertise
Under the MOU, the two agencies will combine DOE’s high-performance computing capabilities with NIST’s leadership in AI safety. DOE’s 17 National Laboratories will support large-scale model testing, powered by some of the fastest supercomputers in the world. NIST will guide risk-management work through the U.S. AI Safety Institute and its consortium of industry, academic, and civil-society members. The partnership will assess AI models’ capabilities and risks across national security, public safety, and economic sectors. It will also expand the use of specialized AI testbeds, giving researchers controlled environments for technical evaluations.
Focus on Privacy, Security, and Compliance With Executive Order 14110′
The collaboration follows Executive Order 14110, which directs federal agencies to ensure AI development remains safe and trustworthy. The order requires DOE and DOC to work together on shared testbeds and instructs DOC to create guidance on differential privacy, a technique that protects personal data in large datasets. As part of this work, NIST will develop privacy and risk-management guidelines for advanced models. Also, these guidelines will inform evaluations that measure how AI systems perform in sensitive or high-risk settings.
High-Performance Testing and Red-Team Exercises
DOE will open its high-performance and cloud-based testbeds to support intensive model evaluations. These environments will help researchers study robustness, security weaknesses, and potential misuse. NIST will coordinate model access and lead reviews that examine broader societal impacts. Both agencies will also run joint “red-team” exercises, which simulate threats to test AI systems’ defenses. Consequently, these exercises help identify vulnerabilities that attackers might exploit and provide a clearer view of how AI could affect national security.
Coordinated Oversight and a Five-Year Partnership
Helena Fu of DOE and Elizabeth Kelly of DOC will serve as principal coordinators. They will supervise the partnership, hold regular meetings, and track progress on shared initiatives. Meanwhile, the MOU will remain in effect for five years, giving both agencies a long-term framework to expand U.S. leadership in AI safety and address the rising challenges posed by advanced systems.
Need Help?
You might have questions or concerns about how to navigate the global AI regulatory landscape. Therefore, don’t hesitate to reach out to BABL AI. Hence, their Audit Experts can offer valuable insight, and ensure you’re informed and compliant.


