A new report by the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute reveals that artificial intelligence is transforming the landscape of serious online crime, posing urgent challenges for law enforcement and national security. Titled “AI and Serious Online Crime,” the report paints a stark picture of how generative AI, large language models (LLMs), and deepfakes are being harnessed by cybercriminals at a scale and speed previously unseen.
The research, based on interviews, expert workshops, and AI-driven simulations, identifies a growing ecosystem of AI-enabled criminality—ranging from phishing and malware distribution to synthetic child sexual abuse material (CSAM), romance scams, and AI-assisted financial fraud. The report warns that the threat is no longer theoretical: AI systems are already generating real-world harms.
“We’ve entered a new phase where AI isn’t just a tool—it’s an accelerant,” said co-author Professor Joe Burton. “Criminals are exploiting AI to increase scale, lower barriers to entry, and innovate with alarming efficiency.”
Among the key findings, CETaS researchers cite a rapid rise in AI-generated content used to deceive, manipulate, and extort victims. One high-profile example involved the Hong Kong office of a British multinational, which lost £20 million to a fraud ring using AI-generated deepfakes of company executives. Other case studies spotlight the surge in synthetic CSAM and AI-powered romance scams, in which deepfake personas and chatbots lure victims into financially and emotionally exploitative relationships.
The report also highlights the rise of criminal tools like WormGPT and FraudGPT—LLMs optimized for phishing, fraud, and spear-phishing—available on dark web forums and Telegram channels. Meanwhile, attackers increasingly attempt to jailbreak commercial AI systems, remove safety guardrails, and fine-tune models for malicious use.
CETaS warns that UK law enforcement is not adequately equipped to counter this wave of AI-enabled crime. Current capabilities are fragmented and underfunded, and often lag behind the pace of innovation seen in criminal groups. In response, the report calls for the immediate creation of an AI Crime Taskforce within the UK’s National Cyber Crime Unit, backed by dedicated Home Office funding.
The report recommends proactive measures including:
- Deployment of AI by police and intelligence agencies to counter AI-enabled crime.
- Establishment of international working groups within Europol to share threat intelligence.
- Centralized tracking of AI tools misused for criminal purposes.
- Investment in AI security testing to reduce model compliance with harmful prompts, particularly in the fraud domain.
Law enforcement, the report argues, must move beyond reaction and begin to “fight AI with AI.” This includes scaling technical countermeasures, integrating AI expertise across departments, and ensuring compatibility with evolving global standards.
But while the report acknowledges the promise of defensive AI systems—such as those detecting deepfakes and enhancing cyber resilience—it cautions that regulation alone cannot contain the threat. As one interviewee put it, “criminals don’t care about regulation.” Instead, experts emphasize the need for multi-agency collaboration, robust training, and strategic investments in AI infrastructure.
Without swift, coordinated action, the report warns, AI could continue to shift the balance of power toward cybercriminals—leading to higher financial losses, more severe personal harm, and an erosion of public trust in digital systems. The window for proactive disruption, researchers conclude, is rapidly closing.
Need Help?
If you have questions or concerns about AI laws, reports, guidelines, or regulations, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.