OpenAI Report Warns of Growing Use of AI in Influence and Cyber Operations

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/14/2024
In News

A newly released report by OpenAI, co-authored by Ben Nimmo and Michael Flossman, sheds light on the growing use of artificial intelligence (AI) in global influence and cyber operations. Published in October 2024, the report provides an in-depth analysis of how malicious actors increasingly leverage AI to manipulate online spaces, amplify disinformation, and conduct covert operations across multiple platforms. The findings reveal that AI has become a double-edged sword: it is used both to detect harmful activity and to conduct large-scale influence operations that pose serious global risks.

The OpenAI report highlights that AI is playing an increasingly central role in influence operations, allowing threat actors to scale their activities quickly and target broader audiences. One of the key examples discussed in the report is an operation dubbed “Stop News,” originating from Russia. This operation targeted audiences in the UK, West Africa, and Russian-speaking regions by creating content in English and French. Using AI-driven tools, it generated short comments and long-form articles that were disseminated across social media platforms and websites. Despite the scale of the effort, most posts received little to no engagement.

However, the report emphasizes that low engagement does not diminish the potential impact of such operations. In the case of “Stop News,” the operation managed to form partnerships with local organizations, further extending its influence despite the lack of direct engagement on social media. This illustrates the potential reach of AI-generated content, even when it appears ineffective at first glance.

The report also details an operation from Iran, codenamed “STORM-2035,” which focused on influencing the U.S. elections. This operation used AI models to generate politically charged content in English and Spanish, which was then disseminated across platforms like X (formerly Twitter) and Instagram. Beyond election-related discussions, the operation sought to influence specific communities, such as Venezuelan and Latinx populations. The report notes that AI allows these operations to produce content at a rapid pace, making it easier for threat actors to flood digital platforms with disinformation and politically motivated messaging.

The OpenAI report underscores the paradoxical role of AI in influence operations. While AI plays a crucial role in helping tech companies and governments detect and disrupt harmful activities, it also provides threat actors with powerful tools to expand and automate their influence campaigns. The report indicates that AI is often used in the intermediate stages of these operations, allowing bad actors to generate content, manipulate trends, and spam targeted hashtags with little human intervention.

One of the report’s key findings is that AI-generated short comments are commonly used to manipulate public discourse. Even though these tactics often draw low engagement, the sheer volume of content produced can still shape online discussions and amplify misinformation.

The report warns of increasing risks associated with more sophisticated AI-driven influence operations. The use of AI to create deepfakes, combined with advanced language models, raises concerns about the ability of AI-generated content to become even more persuasive and difficult to detect. This could further blur the line between genuine and artificially created information, complicating efforts to counter disinformation campaigns.

To address these emerging threats, the OpenAI report recommends a multi-layered approach involving collaboration between governments, tech companies, and AI developers. The authors stress the importance of continuing to develop advanced AI detection systems that can keep pace with the evolving tactics used by threat actors. Additionally, the report calls for stronger international cooperation to prevent state-sponsored actors from abusing AI technologies for covert operations.

Need Help?

Keeping track of all the AI regulations, laws, and other policies around the globe can be difficult, especially when they may impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.
