U.S. House Introduces AI Incident Reporting and Security Enhancement Act, Tasking NIST with Updating AI Vulnerability Guidelines

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/25/2024
In News

UPDATE — AUGUST 2025: Since its introduction in September 2024, the AI Incident Reporting and Security Enhancement Act has not become law, but much of its substance has carried forward into U.S. AI governance in 2025. The bill was referred to the House Committee on Science, Space, and Technology but did not advance before the 118th Congress ended in January 2025. It has not been reintroduced under the same number in the 119th Congress; instead, its key provisions are being implemented through agency action and related legislative efforts.

NIST has already begun expanding the National Vulnerability Database (NVD) to include AI- and machine learning–specific vulnerabilities such as adversarial attacks, data poisoning, and model inversion. In partnership with the Cybersecurity and Infrastructure Security Agency (CISA), NIST launched a working group on AI vulnerabilities and is developing taxonomies and severity ratings analogous to the CVSS scores assigned to traditional software vulnerabilities. Early in 2025, CISA also piloted a voluntary AI incident reporting portal where companies, researchers, and government agencies can share information about AI security and safety incidents. These reports, anonymized and aggregated, are helping shape risk assessment practices and policy discussions.
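NIST has not published a final schema for AI entries in the NVD, so what such a record will look like remains an open question. As a rough illustration only, the minimal Python sketch below shows one way an NVD-style entry could be extended with AI-specific fields; every field name, category, and the 0.0–10.0 severity convention here is an assumption loosely borrowed from existing NVD/CVSS practice, not taken from any NIST draft.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical taxonomy of the AI-specific failure modes named above;
# NIST's eventual categories may differ.
class AIVulnClass(Enum):
    ADVERSARIAL_ATTACK = "adversarial_attack"  # crafted inputs that flip model outputs
    DATA_POISONING = "data_poisoning"          # tampering with training data
    MODEL_INVERSION = "model_inversion"        # reconstructing training data from outputs

@dataclass
class AIVulnerabilityRecord:
    """Sketch of an NVD-style entry extended with AI-specific fields."""
    record_id: str              # a CVE-like identifier (format assumed)
    description: str
    vuln_class: AIVulnClass
    affected_component: str     # model, training pipeline, serving API, ...
    severity: float             # 0.0-10.0, by analogy with CVSS base scores

    def is_critical(self) -> bool:
        # CVSS treats 9.0 and above as critical; we borrow that convention.
        return self.severity >= 9.0

# Example entry (entirely fictional).
record = AIVulnerabilityRecord(
    record_id="AIV-2025-0001",
    description="Training-set poisoning enables targeted misclassification.",
    vuln_class=AIVulnClass.DATA_POISONING,
    affected_component="training pipeline",
    severity=8.1,
)
print(record.record_id, record.vuln_class.value, record.is_critical())
```

In practice the taxonomy would likely be far richer, covering attack preconditions, affected lifecycle stages, and disclosure metadata.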

ORIGINAL NEWS POST:

U.S. House Introduces AI Incident Reporting and Security Enhancement Act, Tasking NIST with Updating AI Vulnerability Guidelines


The U.S. House of Representatives has introduced the AI Incident Reporting and Security Enhancement Act, a bill designed to strengthen the federal government’s ability to manage security and safety risks related to artificial intelligence (AI). The legislation calls for updated guidelines to address AI vulnerabilities and for the creation of voluntary mechanisms to report significant AI security incidents.


Strengthening AI Vulnerability Management


Sponsored by members of the 118th Congress, the bill directs the National Institute of Standards and Technology (NIST) to modernize the National Vulnerability Database (NVD) to include AI-specific risks. Lawmakers emphasize that AI presents distinct security challenges compared to traditional software systems and that current frameworks do not adequately account for those differences. At the core of the legislation is the recognition that AI models can fail in unpredictable ways, creating vulnerabilities that differ from standard coding flaws. NIST would be tasked with defining these vulnerabilities, identifying their root causes, and updating the NVD to better capture and categorize AI-related security threats.
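To see why these failures differ from ordinary coding flaws, consider a toy adversarial example: a linear classifier that is implemented exactly as intended, yet whose output flips under a small, deliberately chosen change to its input. The sketch below is purely illustrative (the weights, input, and perturbation size are made up) and uses a standard FGSM-style sign-of-the-gradient perturbation.

```python
import numpy as np

# A correctly implemented linear classifier: label = 1 if w.x + b > 0 else 0.
w = np.array([0.9, -0.4, 0.3])
b = -0.1

def predict(x: np.ndarray) -> int:
    return int(w @ x + b > 0)

x = np.array([0.5, 0.2, 0.1])                # a legitimate input
print("original prediction:", predict(x))    # -> 1

# FGSM-style perturbation: for a linear model the gradient of the score is
# just w, so stepping each feature against sign(w) lowers the score fastest.
eps = 0.2
x_adv = x - eps * np.sign(w)

print("perturbed prediction:", predict(x_adv))                 # -> 0
print("largest per-feature change:", np.abs(x_adv - x).max())  # 0.2
```

There is no bug to patch here: the vulnerability lives in the model’s decision boundary rather than in the code, which is why lawmakers argue that existing flaw categories do not capture it.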


Establishing Standards and Guidelines


The bill also requires NIST to develop standards and procedures for managing AI vulnerabilities. These would include monitoring, testing, and mitigation processes to ensure that AI systems remain secure throughout their lifecycle. The goal is to help government agencies and private companies prevent potential breaches or safety failures before they cause harm. By updating existing security frameworks, the legislation aims to ensure that AI-related weaknesses are documented, tracked, and addressed promptly. This proactive approach reflects a growing acknowledgment that AI technologies—used in everything from finance to defense—require constant oversight to remain trustworthy and secure.


Creating a Voluntary AI Incident Reporting System


A major feature of the bill is the creation of a voluntary reporting program for AI-related security and safety incidents. Under this system, NIST would collaborate with the Cybersecurity and Infrastructure Security Agency (CISA) and other federal partners to develop standardized reporting guidelines. The reporting system would distinguish between AI security incidents—those involving malicious exploitation—and AI safety incidents, such as system malfunctions or unintended outputs that cause harm. This classification is intended to improve coordination and response efforts by tailoring mitigation strategies to the nature of each incident.
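The bill does not prescribe a data format, but the security/safety distinction it draws maps naturally onto a simple triage step. The Python sketch below is hypothetical; in particular, reducing the decision to a single attacker_involved flag is a simplification, since real triage would weigh many more signals.

```python
from dataclasses import dataclass
from enum import Enum, auto

class IncidentType(Enum):
    SECURITY = auto()   # malicious exploitation of an AI system
    SAFETY = auto()     # malfunction or unintended harmful output, no attacker

@dataclass
class IncidentReport:
    summary: str
    attacker_involved: bool  # did a deliberate adversary trigger the incident?

def classify(report: IncidentReport) -> IncidentType:
    # Under the bill's distinction, intent is the dividing line:
    # exploitation by an adversary -> security; everything else -> safety.
    return IncidentType.SECURITY if report.attacker_involved else IncidentType.SAFETY

reports = [
    IncidentReport("Prompt injection exfiltrated user data", attacker_involved=True),
    IncidentReport("Routing model sent crews to wrong addresses", attacker_involved=False),
]
for r in reports:
    print(classify(r).name, "-", r.summary)
```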


Enhancing Collaboration Across Sectors


The bill encourages participation from academia, industry, and government, aiming to foster a multi-stakeholder approach to AI risk management. Shared data from incident reports would be used to build a clearer picture of the threats AI systems pose and how to mitigate them effectively. Within three years of enactment, NIST would deliver a report to Congress summarizing its findings and recommending whether the voluntary system should evolve into a permanent framework. That report would also evaluate the cost-effectiveness and benefits of a nationwide AI incident reporting infrastructure.


Laying the Groundwork for Safer AI


The AI Incident Reporting and Security Enhancement Act represents a crucial step toward comprehensive AI risk management, underscoring the growing need for transparency, accountability, and security in how intelligent systems are designed and deployed.


Need Help?


If you have questions or concerns about this U.S. bill, or about any other global AI bills, regulations, or reports, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.