UPDATE – MAY 2025:
The Protect Elections from Deceptive AI Act, introduced in 2023, was placed on the Senate Legislative Calendar but did not progress further and was not enacted into law. Similarly, the No Section 230 Immunity for AI Act did not advance beyond committee discussions. These developments highlight the ongoing challenges in establishing federal regulations for AI technologies.
ORIGINAL STORY:
As the European Union nears completion of the Harmonised Rules on Artificial Intelligence, or the EU AI Act, the U.S. Senate is ramping up efforts to regulate artificial intelligence (AI).
On September 12, 2023, at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, lawmakers from both sides of the aisle touted their plans for AI regulation. Democratic Senator Amy Klobuchar of Minnesota introduced the Protect Elections from Deceptive AI Act, which would ban the use of deceptive AI and deepfakes in political ads. While the bill would amend the Federal Election Campaign Act of 1971, it builds on a bipartisan roadmap introduced back in June 2023, when Republican Senator Josh Hawley of Missouri and Democratic Senator Richard Blumenthal of Connecticut introduced the No Section 230 Immunity for AI Act. Beyond lawmakers, the hearing featured testimony from law professor Woodrow Hartzog, Microsoft President Brad Smith, and NVIDIA chief scientist William Dally.
A joint press release revealed lawmakers' thinking on the so-called AI Roadmap for Congress. First, lawmakers want an independent oversight body established to license and audit companies developing high-risk AI systems, such as those using facial recognition or large language models. They also want safety brakes on those high-risk AI uses, along with a right to human oversight, such as review by auditors. Consumers would also be given control over their personal data. Lawmakers also want to ensure legal accountability for harms caused by AI, enforced both through oversight and through the ability of private citizens to file lawsuits. To protect national security interests, they want to restrict the export of advanced AI to America's adversaries. The press release goes on to call for mandated transparency around access, data, limitations, and accuracy. Lawmakers add that they want disclosures, written in plain language, for users interacting with an AI system. Finally, the press release says lawmakers want to create a public database of AI systems where people can read reports about potential harms and harmful incidents.
The following day, Democratic Senator Chuck Schumer of New York met on Capitol Hill with several high-profile CEOs, including those of IBM, Meta, and X. It wasn't just the big players: attendees also included Tristan Harris, co-founder of the Center for Humane Technology, and several tech researchers. The forum, which was open to all 100 senators, unfortunately consists of nine closed-door sessions, so we won't know quite what's happening until a press release or bill is revealed.
BABL AI offers guidance for businesses navigating this shifting AI regulatory landscape in Congress. For questions on compliance or risk, contact BABL AI to speak with an independent auditing and governance expert.