UPDATE — AUGUST 2025: The VET AI Act remains a current and credible legislative effort with bipartisan support. Introduced by Senators John Hickenlooper and Shelley Moore Capito in mid-2024, the bill advanced out of the Senate Commerce Committee before the close of the 118th Congress. It directs NIST to create voluntary guidelines and certification standards for third-party evaluators to verify the safety, privacy, and ethical compliance of AI systems. The bill is still under active discussion and continues to receive broad backing from AI policy experts, industry leaders, and federal agencies. Many view it as a practical step toward external AI assurance—similar to financial audits.
ORIGINAL NEWS STORY:
U.S. Senator Introduces VET AI Act to Establish Independent Verification Framework for AI Systems
U.S. Senator John Hickenlooper, chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, has introduced the Validation and Evaluation for Trustworthy (VET) AI Act. The legislation aims to create a framework for independent verification of AI systems to ensure they meet strong safety and ethical standards. Senator Shelley Moore Capito co-sponsored the bill. The measure directs the National Institute of Standards and Technology (NIST) to collaborate with federal agencies, industry, academia, and civil society in developing guidelines for certifying third-party evaluators.
Addressing the Gap in External Oversight
Today, AI companies largely self-report how they train and test their models, with little external validation. The VET AI Act seeks to change that by allowing neutral evaluators, much like auditors in finance, to verify whether companies’ practices align with federal and ethical guardrails. Independent assurance would help build public trust as Congress advances broader AI regulatory frameworks. These evaluations would focus on transparency, accountability, and ethical deployment practices.
NIST’s Role and Technical Guidance
The bill assigns NIST, in partnership with the Department of Energy and the National Science Foundation, the task of creating voluntary standards for developers and deployers. These standards would address data privacy, harm mitigation, dataset quality, and governance processes throughout the AI system lifecycle. NIST would also lead a comprehensive study to evaluate current assurance capabilities and identify gaps in expertise, infrastructure, and resources. The findings would help shape stronger, evidence-based verification models.
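The bill leaves the substance of those standards to NIST, but to make the scope concrete, here is a minimal, purely hypothetical Python sketch of how a third-party evaluator might track a developer's claims and supporting evidence across those assurance areas and lifecycle stages. Every name, field, and value below is our illustration; nothing here is drawn from the bill text or from any NIST guidance.

```python
from dataclasses import dataclass, field
from enum import Enum


class Lifecycle(Enum):
    """Stages of the AI system lifecycle an assurance review might cover (illustrative)."""
    DESIGN = "design"
    DATA_COLLECTION = "data_collection"
    TRAINING = "training"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"


@dataclass
class AssuranceItem:
    """One checkable claim that an evaluator would verify against evidence."""
    area: str                 # e.g. "data_privacy", "harm_mitigation", "dataset_quality", "governance"
    stage: Lifecycle          # where in the lifecycle the claim applies
    claim: str                # the developer's or deployer's assertion
    evidence: list[str] = field(default_factory=list)  # documents, logs, test reports
    verified: bool = False    # set True once the evaluator has confirmed the claim


def coverage_gaps(items: list[AssuranceItem], required_areas: set[str]) -> set[str]:
    """Return the required assurance areas with no verified item yet."""
    covered = {item.area for item in items if item.verified}
    return required_areas - covered


if __name__ == "__main__":
    checklist = [
        AssuranceItem("data_privacy", Lifecycle.DATA_COLLECTION,
                      "Training data excludes personal data collected without consent",
                      evidence=["data-inventory.pdf"], verified=True),
        AssuranceItem("harm_mitigation", Lifecycle.DEPLOYMENT,
                      "High-risk outputs are routed to human review"),
    ]
    required = {"data_privacy", "harm_mitigation", "dataset_quality", "governance"}
    print("Unverified areas:", coverage_gaps(checklist, required))
```

In practice, whatever form NIST's guidelines take, the underlying work resembles this loop: enumerate claims, attach evidence, and surface the areas where verification is still missing.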
Advisory Committee and Certification
The legislation also calls for an Advisory Committee to review and recommend criteria for individuals or organizations seeking AI assurance certification. The goal is to ensure that only qualified experts perform evaluations and that the process remains consistent across sectors. This committee would thus play a crucial role in defining what credible third-party verification looks like as AI continues to evolve.
Building a Trustworthy AI Ecosystem
Overall, the VET AI Act marks a significant step toward standardized oversight for AI systems. By establishing a reliable process for independent validation, the legislation would promote transparency and reinforce public confidence, helping ensure that innovation continues responsibly and ethically. As AI reshapes industries worldwide, external verification will be essential to protect the public and encourage sustainable growth.
Need Help?
If you’re wondering how the VET AI Act or other AI regulations could affect your organization, contact BABL AI. Their Audit Experts can help you navigate compliance and evaluate your AI assurance options.