CDT Urges AI Developers to Implement Safeguards Ahead of Remaining 2024 Elections to Combat Misinformation

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/13/2024
In News

With several elections still to be held in 2024, including in the United States, Croatia, Tunisia, and Ghana, the Center for Democracy & Technology (CDT) has released a series of recommendations aimed at ensuring the responsible development and use of generative AI. Titled “Election Integrity Recommendations for Generative AI Developers,” the report emphasizes the critical role AI developers play in safeguarding election integrity and preventing the spread of misinformation, a growing concern with the rise of AI-generated content.

Generative AI, capable of creating realistic text, images, video, and audio, poses unique risks to elections, especially with the potential for producing deceptive or misleading content. The report highlights examples of AI-generated deepfakes, such as fake robocalls impersonating political figures, images depicting false events, and fabricated videos that could influence voter behavior. Given the stakes of the upcoming elections, the CDT is urging developers to take proactive steps to mitigate these risks.

The CDT outlines several key actions for AI developers to adopt before and during the election period. First, it recommends prohibiting the generation of realistic political images, audio, and videos that could mislead voters about election events or political figures. This includes blocking attempts to generate content depicting fake protests, fraudulent election results, or manipulated statements from candidates. The CDT stresses the importance of preventing such content from being created at all, rather than reacting after the fact, to minimize its spread.

Second, the report advises developers to ban the use of generative AI for political campaign activities and advertisements. While AI has the potential to help political campaigns reach voters more effectively, the CDT raises concerns that the technology could be misused for hyper-targeted manipulation or disinformation campaigns. Generative AI could, for example, create thousands of personalized messages that distort facts or sow division among specific demographic groups. The CDT urges developers to temporarily ban these uses until clearer ethical guidelines can be established.

Another pressing issue the report addresses is the need to ensure that generative AI does not interfere with the election process itself. This includes preventing AI tools from generating content that misleads voters about when and where to vote, or incites violence against election workers. The CDT points out that the use of generative AI for these purposes could have devastating consequences, from deterring voter turnout to sparking unrest at polling stations.

The report also highlights the importance of transparency. Developers are encouraged to clearly label AI-generated content, making it easier for the public to distinguish between real and fabricated information. This could involve embedding watermarks in AI-generated images, videos, and audio files. In addition, AI tools should direct users to authoritative sources of election-related information, such as government websites, when answering queries about the election.

One of the most significant challenges identified in the report is the need for AI systems to be audited and updated regularly, especially in light of new and emerging election-related disinformation. Developers are encouraged to implement rigorous testing protocols to ensure that their models respond appropriately to election-related questions and to make these systems more transparent and accessible for external audits by researchers and civil society organizations.

Need Help?

If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
