Congress Looks at a Roadmap for AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/14/2023
In News


While the European Union puts the finishing touches on the Harmonised Rules on Artificial Intelligence, or the EU AI Act, the United States Senate has decided it’s time to look at the development of artificial intelligence. While there have been rumblings about AI for years, 2023 could be viewed as the last leg of a marathon: the U.S. is throwing on some new shoes and bolting toward the finish line. As they say, it’s better late than never, and honestly, it’s always good to see some bipartisanship in Washington, D.C.


On September 12, 2023, at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, senators from both sides of the aisle touted their plans for AI regulation. Democratic Senator Amy Klobuchar of Minnesota took the wheel, introducing the Protect Elections from Deceptive AI Act, which would ban the use of deceptive AI and deepfakes in political ads. The bill, which would amend the Federal Election Campaign Act of 1971, joins a bipartisan push dating back to June 2023, when Republican Senator Josh Hawley of Missouri and Democratic Senator Richard Blumenthal of Connecticut introduced the No Section 230 Immunity for AI Act. Beyond the politicians, the hearing heard testimony from law professor Woodrow Hartzog, Microsoft President Brad Smith, and NVIDIA Chief Scientist William Dally.


When all was said and done, lawmakers released a joint press release on their thoughts moving forward. First, lawmakers want an independent oversight body established to license and audit companies developing high-risk AI systems, such as those using facial recognition or large language models (think ChatGPT). On top of that, they want safety brakes for those high-risk AI uses, along with a right to human oversight, such as auditors. Consumers would also be given control over their personal data. Lawmakers also want to ensure there’s legal accountability for harms caused by AI, both through oversight and through the ability of private citizens to file lawsuits. To protect national security interests, they also want to restrict the export of advanced AI to America’s adversaries. The press release goes on to call for mandated transparency around access, data, limitations, accuracy, and more, as well as plain-language disclosures for users interacting with an AI system. Finally, the press release says that lawmakers want to create a public database of AI systems where people can read reports about potential harms and harmful incidents.


The following day, Democratic Senator Chuck Schumer met on Capitol Hill with several high-profile CEOs, including the heads of IBM, Meta, and X. It wasn’t just the big players; several others were brought in, including Tristan Harris, co-founder of the Center for Humane Technology, and a number of tech researchers. The forum, which was open to all 100 senators, was unfortunately the first of nine closed-door sessions, so we won’t know quite what happened until a press release or bill is revealed. Of course, the media may have some insight through private contacts and sources, so information could trickle out over time. But this week in September is another stride in the run toward regulating AI in the U.S.


In May of this year, President Joe Biden’s White House released AI policies to promote responsible AI innovation. Other items of note in that press release include a White House update to the National AI R&D Strategic Plan, a new request for public input along with plans to host listening sessions, and a report by the Department of Education on risks and opportunities in AI. Over the summer, the Washington Post reported that the Federal Trade Commission (FTC) had launched a major investigation into OpenAI to determine whether ChatGPT was harming consumers through data collection and misinformation. Even before that, FTC Chair Lina Khan published an op-ed in the New York Times about the agency’s approach to AI. The list goes on and on, but the important thing to realize is that the Wild West days of completely unregulated AI are coming to an end in the U.S.


If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.
