UPDATE (AUGUST 2025): The VET AI Act remains a current and credible legislative effort with bipartisan backing. Introduced by Senators John Hickenlooper and Shelley Moore Capito in mid-2024, the bill was advanced out of the Senate Commerce Committee before the close of the 118th Congress. It directs NIST to develop voluntary guidelines and establish certification standards for third-party evaluators to verify the safety, privacy, and ethical compliance of AI systems. The bill is still under active discussion and is widely supported by AI policy experts, industry leaders, and federal agencies as a practical step toward establishing external AI assurance, analogous to financial audits.
ORIGINAL NEWS STORY:
U.S. Senator to Introduce VET AI Act to Establish Independent Verification for AI Companies
U.S. Senator John Hickenlooper, Chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, announced plans to introduce the Validation and Evaluation for Trustworthy (VET) AI Act. This proposed legislation aims to create a framework for independent verification of AI systems, ensuring they meet established safety and ethical standards. The bill directs the National Institute of Standards and Technology (NIST) to collaborate with federal agencies, industry stakeholders, academia, and civil society to develop detailed guidelines for the certification of third-party evaluators.
“AI is moving faster than any of us thought it would two years ago,” said Hickenlooper. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”
Why the VET AI Act Matters
At present, AI companies make claims about safety testing, risk management, and training practices without external verification. The VET AI Act would allow neutral third parties to evaluate those claims. In this way, it mirrors the role of financial auditors. Hickenlooper first outlined this approach in his “Trust, but Verify Framework” speech at Silicon Flatirons in February. He argued that AI auditing standards will increase transparency, protect consumers, and promote responsible adoption. He also called for national privacy legislation to strengthen protections for Americans’ data.
NIST’s Role
The VET AI Act would direct NIST, working with the Department of Energy and the National Science Foundation, to create voluntary specifications. These guidelines would address both internal and external assurance practices. Key factors include:
- Data privacy protections
- Mitigation of potential harms
- Dataset quality
- Governance and communication across the AI lifecycle
The bill also calls for an Advisory Committee. This group would recommend criteria for certifying individuals and organizations that perform AI assurance. The goal is to ensure evaluators have both expertise and credibility.
Market Study and Next Steps
NIST would also conduct a study of the current AI assurance ecosystem. The study would assess existing methodologies, identify gaps in facilities and resources, and evaluate market demand for assurance services. The results would inform a stronger, more effective framework. Although the final bill text has yet to be released, Hickenlooper’s proposal addresses the speed of AI development. It also underscores the need for timely regulation. By promoting independent verification, the VET AI Act aims to build trust and protect consumers from the risks of unchecked AI systems.
Need Help?
If you’re wondering how Hickenlooper’s bill, or any other AI bill around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and address your questions and concerns.