U.S. Senator Introduces VET AI Act to Establish Independent Verification Framework for AI Systems

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/01/2024
In News

In an update to a story we brought you earlier in July, U.S. Senator John Hickenlooper, chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, has unveiled the Validation and Evaluation for Trustworthy (VET) AI Act. This new legislation aims to create a framework for the independent verification of AI systems, ensuring they meet established safety and ethical standards. The bill, co-sponsored by Senator Lisa Murkowski, directs the National Institute of Standards and Technology (NIST) to collaborate with federal agencies, industry stakeholders, academia, and civil society to develop detailed guidelines for the certification of third-party evaluators.


Currently, AI companies often make claims about their risk management practices, such as how they train AI models and conduct safety tests, without any external verification. The VET AI Act seeks to establish a pathway for independent evaluators to verify these claims. These evaluators, functioning similarly to auditors in the financial industry, would work as neutral third parties to ensure that AI companies’ practices align with established guardrails. This external assurance is expected to become increasingly important as Congress moves to establish AI regulations and benchmarks for the industry.


Under the VET AI Act, NIST, in coordination with the Department of Energy and the National Science Foundation, is tasked with developing voluntary specifications and guidelines for AI developers and deployers. These guidelines would address critical issues such as data privacy protections, mitigation of potential harms to individuals, dataset quality, and governance and communication processes throughout the AI systems’ development lifecycle.


Additionally, the bill proposes the establishment of a collaborative Advisory Committee. This committee would review and recommend criteria for individuals or organizations seeking certification to conduct internal or external assurance for AI systems. The goal is to ensure that evaluators have the necessary expertise and credibility to assess AI systems accurately and fairly.


The VET AI Act also mandates a comprehensive study by NIST to evaluate the current capabilities and methodologies used in AI assurance. This study will help identify necessary facilities or resources and assess overall market demand for internal and external AI assurance services. The findings are expected to inform the development of robust and effective assurance frameworks, ensuring that AI systems are safe, ethical, and trustworthy.


The proposed legislation marks a significant step toward establishing a standardized framework for AI system validation and evaluation, providing much-needed oversight in a rapidly evolving field. As AI continues to integrate into various sectors of society, such measures are crucial for ensuring public trust and the responsible use of technology.


Need Help?



If you’re wondering how the VET AI Act, and other AI regulations around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and answer your questions and concerns.
