In a report published in June 2025, OpenAI outlined its evolving approach to identifying and disrupting the malicious use of its AI technologies, highlighting several real-world case studies and detailing collaborations aimed at protecting the public from AI-driven threats.
Titled “Disrupting the Malicious Uses of AI,” the report offers a rare glimpse into how OpenAI monitors and mitigates harmful use of its models by bad actors across the web, particularly in foreign influence operations and cybercrime. It also marks a public commitment to building out what OpenAI calls its “threat intelligence and disruption” capability, designed to preemptively stop misuse before it can scale.
The report notes that AI technologies are general-purpose and dual-use, which makes them powerful tools for good but also attractive for malicious purposes. OpenAI’s threat disruption team, first formally launched in 2024, works to identify such misuse in real time, disable offending accounts, strengthen model safeguards, and share findings with law enforcement, industry, and civil society groups.
The report focuses on ten case studies from the past year. These include nation-state-aligned actors using large language models (LLMs) to write propaganda in multiple languages, generate fake news articles and social media comments, and create content to support influence operations. In other cases, cybercriminals used LLMs to improve the grammar and persuasiveness of phishing emails and to help write malicious scripts and code.
OpenAI claims that its investigation into these campaigns led to the takedown of hundreds of accounts and contributed to broader industry alerts and coordinated responses. In addition, it points to improvements in prompt monitoring and abuse detection as key parts of its ongoing defense strategy.
The report does not name any specific adversaries or countries behind the malicious campaigns, but it stresses the need for coordinated threat intelligence sharing across platforms and sectors. It also underscores the difficulty of attribution, noting that generative AI tools are widely accessible and easily repurposed by even low-skill threat actors.
OpenAI pledges to deepen its collaboration with public and private partners. This includes participating in multilateral frameworks like the Frontier Model Forum and working with entities like the Partnership on AI. It also calls for clearer norms around responsible use and accountability, and for cross-industry investment in defense mechanisms.
The report concludes with a call for transparency in AI governance. This disclosure comes as AI developers face growing pressure from regulators and civil society to address the potential for generative AI tools to enable misinformation, fraud, and other harms. While OpenAI’s report offers only a narrow slice of the threat landscape, it signals a growing willingness by the company to treat security and misuse prevention as core parts of its AI deployment strategy.
Need Help?
If you’re concerned or have questions about how to navigate the US or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.