Ofcom Unveils AI Strategy to Boost Innovation and Safeguard Consumers Across UK Communications Sectors

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 06/12/2025
In News

Ofcom has released its strategic roadmap for the use of artificial intelligence (AI), detailing how the UK’s communications regulator plans to support innovation while protecting consumers from emerging risks in a rapidly evolving tech landscape. The June 2025 publication outlines a dual approach: enabling AI adoption across regulated sectors and deploying the technology internally to improve Ofcom’s own operations.


From mobile networks to online safety, AI is increasingly embedded in the industries Ofcom oversees. In broadcasting, AI tools are being used to generate real-time captions, translations, and audio descriptions. In telecoms, operators are applying AI to network management and cybersecurity, while postal services are experimenting with AI-powered route optimization. Ofcom’s spectrum datasets are also being used by researchers to develop advanced AI models that improve efficiency in frequency allocation.


“Our regulation is technology-neutral,” the agency stated, emphasizing that companies are free to adopt AI solutions without prior approval. Still, Ofcom stressed the importance of proactive risk mitigation, particularly as AI systems are deployed in ways that can affect consumer trust and safety.


One area of heightened concern is deepfake content. According to Ofcom’s latest findings, two in five UK internet users aged 16 and older have encountered a deepfake, and one in seven of them has seen a sexual deepfake. Of those, 15% said it featured someone they knew, 6% believed it showed themselves, and 17% suspected the subject was under 18. To counter such harms, Ofcom is enforcing new safety-by-design obligations under the UK’s Online Safety Act, requiring platforms to assess and address risks associated with AI-generated content.


Internally, Ofcom is trialing over a dozen AI tools to streamline its regulatory activities. These include using generative AI to translate broadcast content more efficiently and to summarize public consultation responses so key themes surface faster. The agency is also using AI for smarter spectrum planning, with the potential to improve bandwidth capacity in densely populated areas.


Ofcom has built a team of over 100 technology specialists—including about 60 AI experts—and is taking a safety-first approach by piloting tools before rolling them out across departments. The agency is working closely with industry bodies and regulators, including through the Digital Regulation Cooperation Forum (DRCF), to ensure AI development aligns with national standards and safeguards.


The agency also highlighted collaborative initiatives such as SONIC Labs, developed with Digital Catapult, which provide real-world environments for testing AI in mobile network infrastructure, including Open RAN deployments. Additionally, Ofcom is releasing unique datasets—like those on UK spectrum use—to support academic and commercial research into AI systems.


While supporting AI-driven growth, Ofcom acknowledged that risks primarily fall on consumers and called for a balanced approach. “We will continue to enable innovation and growth while working to identify and mitigate the associated risks to ensure AI works in the interests of everyone,” the agency stated.


Need Help?


If you have questions or concerns about any UK or global AI laws, reports, guidelines, or regulations, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

