With several elections still to be held in 2024, including in the U.S., Croatia, Tunisia, and Ghana, the Center for Democracy & Technology (CDT) has released a series of recommendations aimed at ensuring the responsible development and use of generative AI. Titled “Election Integrity Recommendations for Generative AI Developers,” the report emphasizes the critical role that AI developers play in safeguarding election integrity and preventing the spread of misinformation, a growing concern with the rise of AI-generated content.
Rising Risks from Generative AI
Generative AI can create highly realistic text, images, videos, and audio. While powerful, these tools pose unique risks during elections by enabling AI-generated deepfakes and fabricated media that can mislead voters. The CDT report cites recent examples, such as fake robocalls mimicking political figures and false images depicting fabricated events. These manipulations, the report warns, could influence voter perceptions and erode confidence in democratic institutions.
Key Recommendations for Developers
The CDT outlines several actions developers should take before and during election periods to curb misuse. First, developers should block realistic political images, videos, or audio that could misrepresent election events or political figures. This includes blocking attempts to generate content that could depict fake protests, fraudulent election results, or manipulated statements from candidates. The CDT stresses the importance of preventing such content from being created at all, rather than reacting after the fact, to minimize its spread.
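The report does not prescribe how such pre-generation blocking should be implemented, but the idea can be illustrated with a minimal sketch. The pattern list and function below are purely hypothetical assumptions for illustration; production systems would rely on far more sophisticated classifiers and policy layers.

```python
# Hypothetical sketch of pre-generation prompt screening, illustrating
# the CDT's point that misuse should be blocked before content is
# created rather than moderated after the fact. The blocked patterns
# below are illustrative assumptions, not any vendor's actual policy.

BLOCKED_PATTERNS = [
    "fake election results",
    "fabricated protest",
    "deepfake of a candidate",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused before generation."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)

print(screen_prompt("Generate an image of fake election results"))  # True
print(screen_prompt("Summarize today's weather forecast"))          # False
```

The design point is that the check runs before any model inference, so disallowed content never exists to be spread in the first place.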
Second, the report advises developers to ban the use of generative AI for political campaign activities and advertisements. While AI has the potential to help political campaigns reach voters more effectively, the CDT raises concerns that the technology could be misused for hyper-targeted manipulation or disinformation campaigns. Generative AI could, for example, create thousands of personalized messages that distort facts or sow division among specific demographic groups. The CDT urges developers to ban these uses temporarily, until clearer ethical guidelines can be established.
Another pressing issue the report addresses is the need to ensure that generative AI does not interfere with the election process itself. This includes preventing AI tools from generating content that misleads voters about when and where to vote, or incites violence against election workers. The CDT points out that the use of generative AI for these purposes could have devastating consequences, from deterring voter turnout to sparking unrest at polling stations.
Promoting Transparency and Accountability
Transparency is a cornerstone of the CDT’s recommendations. Developers should clearly label AI-generated content using techniques such as embedded watermarks or metadata, making it easier for the public to distinguish between real and synthetic information. AI tools should also direct users to election-related information, such as verified government websites, when responding to voting-related questions. This measure helps prevent accidental dissemination of false information.
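The labeling recommendation above can also be sketched in code. The record structure and field names here are illustrative assumptions; real deployments would use established provenance standards such as C2PA content credentials or embedded watermarks rather than this simplified wrapper.

```python
# Hypothetical sketch of attaching machine-readable provenance metadata
# to AI-generated content, in the spirit of the CDT's transparency
# recommendation. Field names are illustrative, not a real standard.

from datetime import datetime, timezone

def label_output(content: str, model_name: str) -> dict:
    """Wrap generated content with a provenance record marking it as synthetic."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_output("Sample synthetic text.", "example-model-v1")
print(record["provenance"]["ai_generated"])  # True
```

Keeping the label in structured metadata, rather than only in visible text, lets downstream platforms and researchers detect synthetic content automatically.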
The CDT further emphasizes the importance of regular audits and system updates. Developers should test their AI models for election-related vulnerabilities and allow independent researchers and civil society groups to review their systems. This openness can help identify emerging threats and strengthen public confidence in AI technology.
A Call for Responsible Innovation
The CDT’s recommendations highlight a growing expectation that AI developers share responsibility for safeguarding democratic processes. As generative AI continues to advance, the organization argues that ethical oversight and accountability must evolve alongside it. By acting early—through transparency, restrictions on political use, and continuous auditing—developers can help ensure that AI innovation strengthens democracy rather than undermines it.
Need Help?
If you have questions about how to navigate the global AI regulatory landscape, reach out to BABL AI. Their Audit Experts can provide insights to help your organization stay informed, compliant, and aligned with responsible AI practices.