The AI Incident Reporting and Security Enhancement Act has been introduced in the U.S. House of Representatives. The bill aims to strengthen the federal government’s response to artificial intelligence (AI)-related security and safety risks by requiring updated guidelines for managing AI vulnerabilities and by establishing voluntary reporting mechanisms for significant AI security incidents.
The bill, introduced during the 118th Congress, directs the Director of the National Institute of Standards and Technology (NIST) to update the National Vulnerability Database (NVD) to reflect AI security vulnerabilities. The act emphasizes the need for modernized processes and procedures for addressing AI-specific risks and for ensuring that AI vulnerabilities are appropriately tracked and managed.
At the core of the legislation is the recognition that AI systems have distinct security vulnerabilities compared to traditional software. These vulnerabilities, which often stem from the complexity and unpredictability of AI (for example, poisoned training data or adversarial inputs crafted to manipulate a model’s outputs), require tailored management approaches. As part of the bill, NIST is tasked with identifying and establishing common definitions and characteristics of AI vulnerabilities that may not be adequately addressed by the current NVD framework.
The legislation also mandates the development of standards and guidelines for the technical management of AI vulnerabilities. This includes creating processes to monitor, evaluate, and address weaknesses within AI systems so that vulnerabilities are managed effectively and the risks they pose are minimized.
In addition to managing vulnerabilities, the bill proposes a voluntary reporting system for significant AI security and safety incidents. NIST, in collaboration with the Cybersecurity and Infrastructure Security Agency (CISA) and other federal agencies, will convene a multi-stakeholder process to develop guidelines for tracking and reporting these incidents. This initiative aims to encourage industry, academia, nonprofit organizations, and government agencies to share information on substantial AI incidents, helping build a comprehensive understanding of the threats posed by AI systems.
A critical component of the reporting system is the differentiation between AI security incidents and AI safety incidents. The bill calls for the establishment of classifications and taxonomies to categorize incidents based on their characteristics and impacts. This differentiation is key to developing effective responses, as security incidents might involve intentional exploitation of vulnerabilities, while safety incidents could involve accidental failures with potentially serious consequences.
The bill requires NIST to submit a report to Congress within three years of the act’s passage, detailing the findings of the multi-stakeholder process and providing recommendations for establishing a permanent reporting system. The report will assess the usefulness and cost-effectiveness of voluntary tracking efforts, as well as the potential for broader implementation across sectors.
By focusing on both AI vulnerability management and incident reporting, the AI Incident Reporting and Security Enhancement Act aims to provide a framework for addressing the growing concerns around AI security and safety. As AI continues to evolve and permeate all aspects of society, this legislation represents a critical step toward ensuring that the technology is deployed responsibly and with due consideration for the risks it poses.
Need Help?
If you have questions or concerns about this U.S. bill or any other global AI bills, regulations, or reports, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.