U.S. Department of Commerce Unveils New AI Safety Guidelines and Tools to Enhance Trust and Security

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/29/2024
In News

The U.S. Department of Commerce has unveiled a series of new measures aimed at enhancing the safety, security, and trustworthiness of artificial intelligence (AI) systems. This announcement, marking 270 days since President Biden’s Executive Order on AI, includes the release of guidance documents, software tools, and updates on patent subject matter eligibility, reflecting the administration’s comprehensive approach to AI regulation.

The National Institute of Standards and Technology (NIST) has released three final guidance documents, initially open for public comment in April, alongside a new draft guidance from the U.S. AI Safety Institute. These documents are designed to help AI developers navigate the complexities of generative AI and dual-use foundation models, which can be used for both beneficial and harmful purposes. Additionally, a software package has been introduced to measure how adversarial attacks can degrade the performance of AI systems, providing a critical tool for developers and users to assess the robustness of their AI technologies.

U.S. Secretary of Commerce Gina Raimondo emphasized the importance of these initiatives in maintaining the United States’ leadership in AI. “AI is the defining technology of our generation, so we are running fast to keep pace and help ensure the safe development and deployment of AI,” said Raimondo. “Today’s announcements demonstrate our commitment to giving AI developers, deployers, and users the tools they need to safely harness the potential of AI while minimizing its associated risks.”

NIST’s document releases cover various aspects of AI technology. The U.S. AI Safety Institute’s draft guidance focuses on managing misuse risks for dual-use foundation models, proposing seven key approaches to mitigate potential harms. These include preventing the use of AI for activities like developing biological weapons or conducting offensive cyber operations. Public comments on this draft guidance are being accepted until September 9, 2024.

Another significant release is the Dioptra software package, designed to test AI systems against adversarial attacks. This tool aims to help developers and users understand how such attacks can impact the performance of AI models, addressing a critical aspect of AI security. Dioptra allows users to simulate various attack scenarios, providing insights into how AI systems can be strengthened against potential vulnerabilities.
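To make the idea of an adversarial attack concrete, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression classifier. This is a generic illustration of the kind of attack scenario a tool like Dioptra evaluates, not Dioptra's actual API; the model, weights, and epsilon value are invented for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge each feature of x in the
    direction that increases the binary cross-entropy loss, with
    the perturbation bounded by eps per feature."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy model: hand-picked weights, not trained (illustration only).
w = np.array([2.0, -1.5])
b = 0.0
x = np.array([1.0, 0.5])          # input correctly classified as class 1
y = 1.0

clean_score = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
adv_score = sigmoid(w @ x_adv + b)
print(f"clean confidence: {clean_score:.3f}, adversarial: {adv_score:.3f}")
```

Even this tiny perturbation flips the toy model's decision, which is exactly the kind of fragility that adversarial-robustness testing is meant to surface before deployment.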

In addition to these resources, NIST has introduced the AI RMF Generative AI Profile, a guide to help organizations identify and manage the unique risks associated with generative AI. This document outlines 12 specific risks, including the potential for generating misinformation and facilitating cybersecurity attacks. It offers a detailed action plan for developers to mitigate these risks, aligning with the broader goals of NIST’s AI Risk Management Framework.

The Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication 800-218A) further complements these efforts by providing guidance on secure software development practices. This publication emphasizes the importance of safeguarding the data used to train AI systems, highlighting strategies to detect and prevent issues like data poisoning and bias.
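As a rough illustration of what detecting data poisoning can look like in practice, the sketch below flags training samples whose features sit unusually far from their own class centroid, a simple heuristic for spotting label-flip poisoning. The function name, dataset, and threshold are all invented for this example; SP 800-218A describes practices and outcomes, not this specific technique.

```python
import numpy as np

def flag_suspect_samples(X, y, k=3.0):
    """Flag samples whose distance to their class centroid exceeds
    the class mean distance by more than k standard deviations --
    a crude screen for mislabeled (possibly poisoned) points."""
    suspects = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        members = X[idx]
        centroid = members.mean(axis=0)
        dists = np.linalg.norm(members - centroid, axis=1)
        cutoff = dists.mean() + k * dists.std()
        suspects.extend(idx[dists > cutoff].tolist())
    return sorted(suspects)

# Toy data: two tight clusters plus one point with a flipped label.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=0.1, size=(20, 2))
X1 = rng.normal(loc=5.0, scale=0.1, size=(20, 2))
poisoned = np.array([[5.0, 5.0]])   # class-1 features, labeled class 0
X = np.vstack([X0, poisoned, X1])
y = np.array([0] * 21 + [1] * 20)
print(flag_suspect_samples(X, y))   # the mislabeled sample stands out
```

Real poisoning defenses are considerably more sophisticated, but the principle is the same: audit the training data for points that do not belong before the model ever learns from them.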

In a broader international context, NIST’s “Plan for Global Engagement on AI Standards” seeks to foster global cooperation on AI-related standards. This initiative underscores the importance of a coordinated approach to AI regulation, promoting consensus and collaboration among international stakeholders. These comprehensive measures, including the involvement of the U.S. Patent and Trademark Office (USPTO) in updating guidance on patent subject matter eligibility, illustrate the administration’s commitment to a holistic and proactive AI strategy.

Need Help?

If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you stay informed and compliant.



Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance news by subscribing to our newsletter.