U.S. Senator to Introduce VET AI Act to Establish Independent Verification for AI Companies

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/03/2024
In News

U.S. Senator John Hickenlooper, Chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, announced plans to introduce the Validation and Evaluation for Trustworthy (VET) AI Act. This proposed legislation aims to create a framework for independent verification of AI systems, ensuring they meet established safety and ethical standards. The bill directs the National Institute of Standards and Technology (NIST) to collaborate with federal agencies, industry stakeholders, academia, and civil society to develop detailed guidelines for the certification of third-party evaluators.


“AI is moving faster than any of us thought it would two years ago,” said Hickenlooper. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”


Currently, AI companies often make claims about how they train their models, conduct safety tests, and manage risks, without any external verification. The VET AI Act would establish a pathway for independent evaluators to verify these claims. These evaluators, similar to auditors in the financial industry, would work as neutral third parties to ensure that AI companies’ practices align with established guardrails. This external assurance is expected to become increasingly important as Congress moves to establish AI regulations and benchmarks for the industry.


Hickenlooper’s proposal is rooted in his “Trust, but Verify Framework,” which he outlined in a speech at Silicon Flatirons in February. In that speech, he emphasized the need for auditing standards for AI to increase transparency, protect consumers, and promote the responsible adoption of AI technologies. Hickenlooper also called for federal privacy legislation to establish a national standard for protecting Americans’ privacy and data.


The VET AI Act would direct NIST, in coordination with the Department of Energy and the National Science Foundation, to develop voluntary specifications and guidelines for AI developers and deployers. These guidelines would cover internal assurance practices and collaborations with third parties for external assurance. Key considerations for these specifications include data privacy protections, mitigation of potential harms to individuals, dataset quality, and governance and communication processes throughout the AI systems’ development lifecycle.


The bill also proposes the establishment of a collaborative Advisory Committee. This committee would review and recommend criteria for individuals or organizations seeking certification to conduct internal or external assurance for AI systems. By creating a standardized certification process, the committee aims to ensure that evaluators have the necessary expertise and credibility.


Furthermore, the VET AI Act mandates that NIST conduct a comprehensive study of the AI assurance ecosystem. This study would examine current capabilities and methodologies used in AI assurance, identify necessary facilities or resources, and assess overall market demand for internal and external AI assurance services. The findings of this study are expected to inform the development of robust and effective assurance frameworks.


While the text of the bill has yet to be released, Hickenlooper’s initiative addresses the rapid pace of AI development and the urgent need for regulatory measures to ensure its safe and ethical deployment. By fostering independent verification and assurance, the VET AI Act aims to build public trust in AI technologies and protect consumers from potential risks associated with unverified AI systems.


Need Help?

If you’re wondering how Hickenlooper’s bill, or any other AI legislation around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to answer your questions and address your concerns.
