FCC Moves to Increase Transparency in AI-Generated Political Ads
With less than three months until the U.S. Presidential election, the Federal Communications Commission (FCC) has proposed new disclosure requirements for AI-generated content in political advertising. The proposal aims to ensure that voters are clearly informed whenever AI-generated content appears in television or radio advertisements.
Under the proposed guidelines, any political advertisement that uses AI technology must clearly disclose this fact to the public. The FCC's initiative comes in response to the growing potential for AI to manipulate voices and images in ways that could mislead voters and disrupt the electoral process. The new rules would not prohibit the use of AI in political ads but would require transparency about its use.
FCC’s Rationale: Protecting Voters from AI Misuse
“Today, the FCC takes a major step to guard against AI being used by bad actors to spread chaos and confusion in our elections. We propose that political advertisements that run on television and radio should disclose whether AI is being used,” said FCC Chairwoman Jessica Rosenworcel. “There’s too much potential for AI to manipulate voices and images in political advertising to do nothing. If a candidate or issue campaign used AI to create an ad, the public has a right to know.”
AI has advanced to the point where it can convincingly mimic human voices and create lifelike images, and the technology has already been misused in political contexts. During the New Hampshire primary, for example, voters received AI-generated robocalls impersonating President Biden that urged them not to vote. Surveys show that roughly three-quarters of Americans are concerned about misleading AI-generated content. The FCC's proposed disclosure rule is designed to address those concerns by making it clear when campaign ads rely on AI.
Complementary Oversight from the FEC
The Federal Election Commission (FEC) is also reviewing new rules for AI in political advertising, particularly for online and digital platforms. While the FEC oversees advertising by federal candidates on the internet, the FCC regulates television and radio. Together, the two agencies are building a coordinated federal response to AI use in campaign messaging. This cooperation signals that policymakers recognize how AI can affect truthfulness, accountability, and voter confidence. By aligning oversight, both agencies hope to preserve election integrity while still allowing responsible innovation.
Growing Bipartisan Action Across States
Nearly half of U.S. states have enacted laws regulating AI and deepfake technology in elections. These bipartisan measures aim to prevent the spread of false or manipulated media that could confuse voters. The FCC's nationwide proposal builds on these state efforts by establishing a uniform standard for AI transparency in political advertising. The agency is now seeking public comment on its proposed rules; that feedback will help refine the final policy and ensure it balances transparency, free speech, and innovation.
Need Help?
If you’re navigating emerging AI advertising rules or want to understand how U.S. and international AI regulations could affect your organization, reach out to BABL AI. Their Audit Experts can help you assess compliance risks and develop trustworthy, transparent AI communication strategies.