ICO Highlights AI and Biometrics as Regulatory Priorities in 2024–25 Annual Report

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/21/2025
In News

The Information Commissioner’s Office (ICO) has named AI and biometrics among its top regulatory priorities in its 2024–25 Annual Report. The report underscores the UK data watchdog’s commitment to enabling responsible innovation while protecting individual privacy.


Presented to Parliament, the report outlines a year of rapid progress and strategic engagement with emerging technologies. The ICO’s AI and biometrics strategy aims to provide clear regulatory expectations and build public trust, and it comes as organizations across the UK increasingly deploy AI systems in sensitive areas such as recruitment, healthcare, and law enforcement.

“Our focus is on ensuring innovation in AI happens in a way that respects data protection rights,” said Information Commissioner John Edwards in his foreword. “We’re not here to hold back progress—but to make sure it happens responsibly.”


A major highlight of the year was the ICO’s phased rollout of guidance on generative AI. By publishing the guidance in chapters, the ICO gave developers timely clarity on data protection expectations as models evolved. The regulator also launched a public consultation on AI transparency and issued recommendations specifically targeting AI applications in recruitment, urging developers to tell users how their personal data is used in algorithmic decision-making.

The ICO’s Regulatory Sandbox program continued to offer direct support to organizations experimenting with innovative data-driven products, including AI. Participants reported increased compliance confidence and cost savings, in some cases between £100,000 and £500,000 in legal and privacy costs.

Internationally, the ICO coordinated with the Canadian privacy regulator on a joint investigation of genetic testing company 23andMe, ultimately issuing a £2.31 million fine after a data breach affecting UK citizens. It also joined the Global CAPE privacy enforcement network and signed memoranda of understanding with regulators in the U.S. and Germany.

The ICO’s AI-related enforcement actions included reprimanding organizations for misusing cookies and for failing to inform users how personal data fuels AI-based recommendation systems. The report emphasizes that enforcement is only one tool in a broader effort to guide organizations toward “privacy by design.”

Looking ahead, the ICO is preparing a formal AI Code of Practice and exploring experimental regulatory approaches that would allow businesses limited flexibility to test AI innovations in real-world settings—while still complying with the UK GDPR.


As AI technologies continue to evolve, the ICO’s 2024–25 report positions the agency as a proactive, collaborative regulator committed to safeguarding rights in a data-driven future.


Need Help?


If you’re concerned or have questions about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

