A newly released report by OpenAI, co-authored by Ben Nimmo and Michael Flossman, sheds light on the growing use of artificial intelligence (AI) in global influence and cyber operations. Published in October 2024, the report provides an in-depth analysis of how malicious actors increasingly leverage AI to manipulate online spaces and run covert operations across multiple platforms. The findings describe AI as a double-edged sword: a tool for detecting and countering harmful activity, but also one that lets threat actors amplify disinformation and conduct covert digital campaigns at unprecedented scale.
How AI Fuels Modern Influence Campaigns
According to the report, AI now plays a central role in enabling threat actors to automate and expand their operations. One example, the Russian-linked campaign “Stop News,” targeted audiences in the UK, West Africa, and Russian-speaking regions. Using AI-generated text in English and French, the operation produced short comments and long-form articles distributed across social media and websites. Although engagement with these posts was low, the campaign still succeeded in forming partnerships with local organizations—extending its reach far beyond online metrics. This, the report notes, demonstrates how AI-generated content can influence public narratives even when immediate engagement appears minimal.
Case Study: Iran’s “STORM-2035” Election Operation
Another example, an Iranian operation codenamed “STORM-2035,” focused on manipulating U.S. election discussions. It relied on AI models to produce politically charged posts in English and Spanish that targeted Latinx and Venezuelan communities across X (formerly Twitter) and Instagram. By automating the creation of emotionally provocative content, the campaign could flood digital platforms with partisan narratives and misinformation. OpenAI’s researchers found that AI systems enable this kind of scaling, allowing threat actors to produce vast quantities of persuasive content faster than ever before.
The Growing Threat of AI-Driven Disinformation
The report highlights how AI now shapes influence operations at nearly every stage—from content generation to audience targeting. Many bad actors use AI tools to generate comments, manipulate hashtags, or simulate public engagement with little human oversight. While each post may attract few responses, the cumulative effect can shift online discussions and distort perceptions of public opinion. Researchers also warn that more sophisticated models, including deepfake generators and large language models, are making it increasingly difficult to distinguish authentic content from artificial output. This technological evolution threatens to blur the line between truth and fabrication, complicating the fight against disinformation.
Strengthening Defenses Through Collaboration
To counter these risks, the report calls for stronger partnerships among governments, AI developers, and social media platforms. OpenAI’s authors recommend advancing AI-powered detection tools that can keep pace with evolving tactics and ensuring transparent reporting on disinformation campaigns. They also urge greater international coordination to prevent state-sponsored misuse of AI technologies. Without shared standards and rapid-response frameworks, the authors warn, malicious AI use could escalate into a persistent threat to democratic institutions and public trust.
Need Help?
Keeping track of all the AI regulations, laws, and other policies around the globe can be difficult, especially when they can impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.


