U.S. Department of Commerce Unveils New AI Safety Guidelines and Tools to Enhance Trust and Security

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/29/2024
In News

UPDATE — AUGUST 2025: The U.S. Department of Commerce’s AI safety guidance and tools announced in July 2024—part of the Biden administration’s AI Executive Order implementation—remain important reference points, but the federal approach has shifted in 2025. Several Biden-era AI rules, including the AI Diffusion Rule, were rescinded in May 2025 in favor of promoting innovation and reducing regulatory burdens. The AI Safety Institute has since been rebranded as the Center for AI Standards and Innovation (CAISI), with a lighter regulatory focus while maintaining work on safety and security standards. The Department now places stronger emphasis on export controls and global compliance expectations, particularly for AI-related semiconductors. While the original NIST guidance, Dioptra adversarial testing tool, and AI RMF Generative AI Profile remain relevant for developers and organizations, they should now be viewed within the context of a more innovation-driven federal AI policy.

 

ORIGINAL NEWS STORY:

 

U.S. Department of Commerce Unveils New AI Safety Guidelines and Tools to Enhance Trust and Security

 

The U.S. Department of Commerce unveiled measures to improve the safety, security, and trustworthiness of artificial intelligence (AI) systems. This announcement, made 270 days after President Biden’s Executive Order on AI, included new guidance documents, software tools, and updates on patent subject matter eligibility. Together, these steps reflected the administration’s wide-ranging approach to AI regulation.

 

NIST Guidance and Drafts

 

The National Institute of Standards and Technology (NIST) released three final guidance documents in July 2024, all of which had first been released for public comment in April. At the same time, the U.S. AI Safety Institute published draft guidance addressing risks tied to dual-use foundation models. The draft outlined seven approaches for managing the risk that such models are misused, warning, for example, against using AI to develop biological weapons or conduct offensive cyber operations. Public comments on the draft remained open until September 9, 2024.

 

Tools for Adversarial Testing

 

NIST also introduced Dioptra, an open-source software testbed for evaluating how adversarial attacks degrade the performance of AI models. By letting developers and users simulate attack scenarios and measure their impact on model accuracy, Dioptra offers a practical way to identify vulnerabilities and strengthen AI systems against them before deployment.
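To make the idea concrete, here is a minimal sketch of the kind of robustness measurement a testbed like Dioptra automates: comparing a model's accuracy on clean inputs with its accuracy on inputs perturbed by the fast gradient sign method (FGSM). This is not Dioptra's own API; it is a plain PyTorch illustration, and `model` and `test_loader` are assumed to be a trained image classifier and its evaluation data.

```python
# Illustrative sketch of an adversarial robustness check (not Dioptra's API).
# Assumes `model` is a trained classifier and `test_loader` yields (images, labels)
# with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, test_loader, epsilon=0.03, device="cpu"):
    """Return (clean accuracy, adversarial accuracy) under an FGSM attack of strength epsilon."""
    model.eval()
    clean_correct, adv_correct, total = 0, 0, 0
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        images.requires_grad_(True)

        # Clean prediction and loss.
        logits = model(images)
        loss = F.cross_entropy(logits, labels)
        clean_correct += (logits.argmax(dim=1) == labels).sum().item()

        # FGSM: nudge each pixel one step in the direction that increases the loss.
        model.zero_grad()
        loss.backward()
        adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

        # Accuracy on the perturbed inputs.
        adv_logits = model(adv_images)
        adv_correct += (adv_logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)

    return clean_correct / total, adv_correct / total
```

A sharp drop from clean to adversarial accuracy at small values of `epsilon` is exactly the kind of vulnerability signal this style of testing is meant to surface.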

 

Generative AI Risk Profile

 

In addition, NIST introduced the AI RMF Generative AI Profile (NIST AI 600-1), a companion resource to its AI Risk Management Framework. The profile helps organizations identify and manage risks that are unique to, or exacerbated by, generative AI. It outlines 12 such risks, including the potential to generate misinformation or aid in cybersecurity attacks, and pairs them with concrete actions developers can take to mitigate them.

 

Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication 800-218A) complements these efforts by extending NIST's Secure Software Development Framework to AI systems. The publication emphasizes safeguarding the data used to train AI models and highlights strategies for detecting and preventing issues such as data poisoning and bias.
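As one illustration of that data-integrity theme (a sketch under assumptions, not a practice prescribed verbatim by SP 800-218A), a team might record cryptographic hashes of its training-data files at ingestion and verify them before each training run, so that silent tampering, one common data-poisoning vector, is caught early. The directory layout and manifest name below are hypothetical.

```python
# Sketch of a training-data integrity check (illustrative, not from SP 800-218A).
# Record SHA-256 digests of dataset files at ingestion, then verify them before training.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str, manifest_path: str = "data_manifest.json") -> None:
    """Hash every file under data_dir and write a manifest of expected digests."""
    digests = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    Path(manifest_path).write_text(json.dumps(digests, indent=2))

def verify_manifest(manifest_path: str = "data_manifest.json") -> list[str]:
    """Return the files whose current hash no longer matches the manifest (or that are missing)."""
    expected = json.loads(Path(manifest_path).read_text())
    mismatches = []
    for file_path, digest in expected.items():
        p = Path(file_path)
        if not p.is_file() or hashlib.sha256(p.read_bytes()).hexdigest() != digest:
            mismatches.append(file_path)
    return mismatches
```

Provenance checks like this complement, rather than replace, statistical screening of the data itself for anomalous or biased samples.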

 

On the international front, NIST’s “Plan for Global Engagement on AI Standards” aims to drive worldwide development and adoption of AI-related consensus standards, promoting cooperation and information sharing among international partners. Alongside the U.S. Patent and Trademark Office’s (USPTO) updated guidance on patent subject matter eligibility, these measures illustrate the administration’s commitment to a holistic and proactive AI strategy.

 

Leadership Perspective

 

U.S. Secretary of Commerce Gina Raimondo stressed the importance of these initiatives. “AI is the defining technology of our generation, so we are running fast to keep pace and help ensure the safe development and deployment of AI,” she said. Raimondo emphasized that the administration is committed to equipping developers, deployers, and users with tools to safely harness AI’s potential while minimizing risks.

 

Need Help?

If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


