FCC Moves to Increase Transparency in AI-Generated Political Ads

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/20/2024
In News

With less than three months until the U.S. Presidential election, the Federal Communications Commission (FCC) has announced its intention to implement new disclosure requirements for AI-generated content in political ads. This proposal seeks to ensure that viewers and listeners are fully informed when artificial intelligence (AI) technology is used in advertisements on television and radio.

Under the proposed guidelines, any political advertisement that uses AI technology must clearly disclose this fact to the public. The FCC’s initiative comes in response to the growing potential for AI to manipulate voices and images in ways that could mislead voters and disrupt the electoral process. The new rules would not prohibit the use of AI in political ads but would mandate transparency about its usage.

“Today, the FCC takes a major step to guard against AI being used by bad actors to spread chaos and confusion in our elections. We propose that political advertisements that run on television and radio should disclose whether AI is being used,” said FCC Chairwoman Jessica Rosenworcel. “There’s too much potential for AI to manipulate voices and images in political advertising to do nothing. If a candidate or issue campaign used AI to create an ad, the public has a right to know.”

AI has advanced to the point where it can convincingly mimic human voices and create lifelike images. This technology has already been misused in political contexts. For instance, during the New Hampshire primary election, voters received AI-generated robocalls impersonating President Biden, instructing them not to vote. According to surveys, approximately three-quarters of Americans are concerned about misleading AI-generated content. The FCC’s proposal aims to address these concerns by ensuring that AI-generated political ads are clearly identified as such, thereby helping to maintain the integrity of the electoral process.

The Federal Election Commission (FEC) is also considering regulations on AI in political ads, with plans to act later this year. While the FEC oversees online advertisements for federal candidates, the FCC's proposal focuses on television and radio, covering areas outside the FEC's jurisdiction.

Nearly half of the states in the U.S. have enacted laws to regulate AI and deepfake technology in elections. These laws are often bipartisan and reflect a growing recognition of the potential threats posed by AI-generated disinformation. The FCC's proposal seeks to bring uniformity and stability to this patchwork of state laws, establishing a consistent national standard for AI transparency in political advertising.

The FCC will now seek public comments on the proposed disclosure rules. This feedback will help refine the regulations to ensure they effectively address the challenges posed by AI in political ads while fostering transparency and trust in the electoral process.

Need Help?

If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.