UPDATE – MAY 2025: The Protect Elections from Deceptive AI Act, introduced in 2023, appeared on the Senate Legislative Calendar but did not move forward in Congress and failed to become law. Likewise, the No Section 230 Immunity for AI Act stalled in committee. These outcomes reflect the ongoing difficulty in passing U.S. AI regulation.
ORIGINAL STORY:
Congress Looks at a Roadmap for AI
As the European Union nears completion of the Harmonised Rules on Artificial Intelligence, or the EU AI Act, the U.S. Senate is ramping up efforts to regulate artificial intelligence (AI).
Congress Debates U.S. AI Regulation and Federal Legislation
On September 12, 2023, at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, lawmakers from both sides of the aisle presented their plans for AI regulation. Democratic Senator Amy Klobuchar of Minnesota introduced the Protect Elections from Deceptive AI Act, which would ban the use of deceptive AI and deepfakes in political ads. While the bill would amend the Federal Election Campaign Act of 1971, it would also be folded into a bipartisan roadmap introduced back in June 2023.
That month, Republican Senator Josh Hawley of Missouri and Democratic Senator Richard Blumenthal of Connecticut had introduced the No Section 230 Immunity for AI Act. Beyond lawmakers, the hearing heard testimony from law professor Woodrow Hartzog, Microsoft President Brad Smith, and NVIDIA Chief Scientist William Dally.
Lawmakers Propose Roadmap for AI Oversight
A joint press release outlined lawmakers' thinking on the so-called AI Roadmap for Congress. First, lawmakers want an independent oversight body established to license and audit companies developing high-risk AI systems, such as those using facial recognition or large language models. On top of that, they want safety brakes for those high-risk AI uses, along with a right to human oversight, such as auditors. Customers would also be given control over their personal data. Lawmakers also want to ensure legal accountability for harms caused by AI, both through oversight and through private citizens' ability to file lawsuits. They further want to restrict the export of advanced AI to America's adversaries in an effort to protect national security interests.
The press release goes on to call for mandated transparency around access, data, limitations, accuracy, and other areas. Lawmakers also want plain-language disclosures for users interacting with an AI system. Finally, the press release says lawmakers want to create a public database of AI systems where people can read reports about potential harms and harmful incidents.
Closed-Door Meetings Follow Senate Hearing
The next day, Senate Majority Leader Chuck Schumer met with major tech leaders—including the CEOs of IBM, Meta, and X—alongside advocates like Tristan Harris from the Center for Humane Technology. While the forum included all 100 senators, the nine follow-up sessions remain closed to the public. Details may emerge in future bills or press releases.
Need Help Navigating AI Laws?
BABL AI offers guidance for businesses navigating the shifting AI regulatory landscape in Congress. For questions on compliance or risk, contact BABL AI to speak with an independent auditing and governance expert.