Opinion: How AI Will Affect Elections in 2024 and Beyond

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/29/2024
In Blog

What will 2024 be remembered for? The accomplishments of the executive order on AI issued by U.S. President Joe Biden? The EU AI Act? It might also be remembered for how AI impacted elections. You’ve probably noticed that a number of U.S. states, from Hawaii to South Dakota, have already introduced legislation dealing with AI deepfakes in political campaigns. You’ve probably also seen the FCC cracking down on AI robocalls after one tried to discourage voters from voting in the New Hampshire primary.

AI’s impact on elections has already been seen in the U.S., Russia, India, Slovakia, and elsewhere. Last year, the New York Times unofficially declared Argentina’s presidential election the first AI election, because both presidential campaigns used AI to generate images and videos promoting themselves or attacking their opponents. The problem was that it was often unclear whether a campaign was actually behind a given AI-generated image or video, or who had created it at all. Worse, some of that content still hasn’t been verified as AI-generated or real. Now, two months into 2024, more than 50 countries are poised to hold their first AI elections. We can no longer deny the impact of AI.

When it comes to AI being used for political influence, 2024 could be worse than last year. Collectively, the countries holding elections this year are home to more than 41 percent of the world’s population and account for 42 percent of global GDP. Experts have also noticed that the quality of generative AI output has improved so dramatically that some people have trouble telling whether an image is fake. Sophisticated generative AI systems are also easier to access than ever, which means anyone with an internet connection can create convincing yet deceptive media. Just like the robocall in New Hampshire, deceptive AI in elections is expected to accelerate as the year goes on, and to keep accelerating beyond it.

Generative AI can be used to misrepresent candidates in a variety of ways through fabricated videos, photos, and audio. On top of that, AI-powered chatbots can promote all of this AI-generated misinformation on social media with near-human conversational skill, further amplifying the false messaging. We’ve already seen this with the wars in Ukraine and Gaza. Those same AI tools can manufacture artificial scandals, then promote and circulate them further than any one person could. Spreading misinformation and using chatbots to comment on it as if it were factual can also create the illusion of widespread belief. Keep in mind that these threats can come from anywhere, internal or external. Compounding the issue further, a candidate can claim that real photos, videos, or audio are generative AI when they’re not.

So what can be done? There need to be safeguards in place. Governments around the world should work together on the threats posed by AI, as we’ve already seen this year. Domestically, countries should begin working out how to govern AI, minimizing its harms while taking advantage of its benefits, which we’ve also already seen this year. At the local level, election offices and other agencies in charge of election security should work together to identify and address misleading AI. Government entities should ensure that AI is used ethically in campaigns and that the public knows when something is AI-generated. That can come in the form of AI-generated media disclosures, watermarking, digital signatures, or disclaimers. Government agencies should also encourage innovation in deepfake detection so that bad actors can be quickly spotted and punished. High-accuracy detection could be implemented not only by governments but also by the social media companies where a growing number of people consume news and information about political candidates. Since the last election cycle, the number of people who get news from TikTok has nearly doubled.
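
To make one of those safeguards concrete, here is a minimal sketch of how a digital signature can prove a media file hasn’t been altered since it was published. It assumes the third-party Python `cryptography` package, and the helper functions are hypothetical illustrations, not any real provenance standard (production systems such as C2PA embed much richer signed metadata).

```python
# Minimal sketch: signing and verifying a media file with Ed25519.
# Assumes `pip install cryptography`; function names are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """The publisher signs the raw media bytes at creation time."""
    return private_key.sign(media_bytes)


def verify_media(
    public_key: Ed25519PublicKey, media_bytes: bytes, signature: bytes
) -> bool:
    """Anyone holding the publisher's public key can check for tampering."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"...raw bytes of a campaign video..."
    signature = sign_media(key, media)

    print(verify_media(key.public_key(), media, signature))            # True
    print(verify_media(key.public_key(), media + b"edit", signature))  # False
```

The useful asymmetry is that only the publisher can produce a valid signature, but anyone, including fact-checkers and platforms, can verify one.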

The news media should educate itself on generative AI and develop ways to test suspected deceptive media so it can fact-check AI-generated content. And even when generative AI is used ethically, journalists and reporters should question the reasoning behind its creation. Just like with any other story, a good journalist should question everything.

Information warfare has been waged for centuries, whether for political gain, furthering an agenda, or pure propaganda. Only the methods have changed: now someone can use AI to resurrect dead political leaders, civil rights icons, celebrities, and others to deliver glowing endorsements. We have to remember that AI isn’t inherently harmful or misleading. While AI can be used to misinform or confuse, it can also be repurposed to increase civic engagement, such as informing people how and where to vote. AI can be used to tackle misinformation by tagging harmful content or directing viewers to accurate, authoritative sources. We can also use AI to ensure that everyone has a voice in democracy. So while 2024 may be the year AI goes after democracy, AI can also be democracy’s greatest ally.

If you’re wondering how these AI regulations could impact you and your organization, don’t hesitate to reach out to BABL AI. Their team of Audit Experts can provide valuable insights and answer your questions and concerns.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter