UPDATE — SEPTEMBER 2025:
The Senate Commerce Subcommittee’s November 19, 2024 hearing on AI-enabled fraud and scams helped set the stage for legislative and regulatory activity in 2025, though progress has been mixed. The five bipartisan bills spotlighted at that session — the Future of AI Innovation Act, the VET AI Act, the AIRIA transparency bill, the COPIED Act, and the TAKE IT DOWN Act — all stalled during the 2024 lame-duck session.
Several measures resurfaced in the 119th Congress. The Future of AI Innovation Act was rolled into a broader AI Innovation and Safety Act of 2025, which remains in committee. The TAKE IT DOWN Act gained momentum after a wave of high-profile deepfake extortion cases; it passed the Senate Judiciary Committee in June 2025 and could see a floor vote later this year. The COPIED Act, addressing watermarking and provenance standards for AI-generated content, is under markup in the Commerce Committee with bipartisan support but heavy industry scrutiny. The AIRIA bill has advanced more slowly, facing pushback from tech firms over mandatory reporting and transparency requirements. Meanwhile, the VET AI Act continues to be discussed in committee as part of a broader push to give NIST a stronger role in AI assurance.
As of September 2025, the TAKE IT DOWN Act is furthest along legislatively. Meanwhile, provenance standards and consumer protection are being advanced more rapidly through agency action than through sweeping congressional bills.
ORIGINAL NEWS POST:
U.S. Senators Push for AI Legislation to Tackle Fraud and Scams
On November 19, the Senate Commerce Committee’s Subcommittee on Consumer Protection, Product Safety, and Data Security held a hearing focused on AI-enabled fraud and scams. Lawmakers from both parties examined how artificial intelligence is accelerating consumer harm and discussed possible legislative responses. With the lame-duck session underway, Senators stressed the urgency of action.
Subcommittee Chair John Hickenlooper (D-CO) acknowledged AI’s benefits but warned about its risks. “For all those benefits, we have to mitigate and anticipate the concurrent risks that this technology brings along with it,” he said. Ranking Member Marsha Blackburn (R-TN) echoed those concerns, citing Federal Trade Commission data showing consumer losses rose by $1 billion in the past year, reaching $10 billion. “AI is driving a lot of this,” Blackburn said, urging Congress to act.
Expert Testimony on Deepfakes and Scams
A panel of experts and affected individuals testified on the evolving threat landscape. Witnesses included Dr. Hany Farid, a leading expert on deepfakes; Justin Brookman, Director of Technology Policy at Consumer Reports; Mounir Ibrahim, Chief Communications Officer at Truepic; and Dorota Mani, whose family experienced an AI-enabled scam.
Together, the panel described how AI tools now make fraud faster, cheaper, and harder to detect. They emphasized that existing laws and enforcement tools struggle to keep pace with these developments.
Key Areas of Concern Raised by Witnesses
Witnesses highlighted four major gaps in the current framework.
- Content Provenance: Panelists highlighted the need for metadata that indicates whether content is AI-generated. Ibrahim noted a lack of incentives for platforms to adopt such measures, calling for greater transparency.
- Comprehensive Privacy Laws: Witnesses and Senators agreed on the need for robust privacy legislation. “It should be criminal that we don’t have a data privacy law in this country,” Farid asserted.
- Corporate Accountability: The panel discussed shifting the burden of responsibility from consumers to AI developers. Farid suggested holding companies accountable for misuse, such as unauthorized voice cloning.
- Stronger Enforcement: Brookman urged Congress to empower the FTC with more resources and legal authority to combat fraud effectively.
Bipartisan AI Bills Under Review
Chair Hickenlooper highlighted five bipartisan bills introduced to address these risks.
- The Future of Artificial Intelligence Innovation Act of 2024: This bill would establish the Artificial Intelligence Safety Institute to set voluntary AI standards and test model safety across various applications.
- Validation and Evaluation for Trustworthy AI Act (VET AI Act): It would direct the National Institute of Standards and Technology (NIST) to create voluntary guidelines for internal and external AI assurance.
- Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA): This act would mandate transparency in AI systems and standardize definitions and reporting for high-impact AI applications.
- The COPIED Act: Aimed at addressing deepfakes, the bill would establish AI-generated content detection standards and disclosure requirements.
- The TAKE IT DOWN Act: This legislation would criminalize the dissemination of non-consensual intimate imagery, including AI-generated deepfakes, and require social media platforms to remove such content.
Legislative Outlook
Lawmakers expressed bipartisan support for targeted AI safeguards. However, the limited time remaining in the lame-duck session raised doubts about whether these bills could pass before adjournment. Senators noted that the final weeks on the calendar would determine whether the proposals advance or carry over into the next Congress.
Need Help?
If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.