The UK’s Information Commissioner’s Office (ICO) is warning that emerging “agentic AI” systems could introduce significant data protection challenges as developers and organizations push to automate more open-ended tasks. The assessment arrives in a new Tech Futures report exploring how autonomous and semi-autonomous AI agents may evolve over the next two to five years, including applications across commerce, government, medicine, cybersecurity and consumer services.
Agentic AI combines generative AI with tools that enable real-world interaction, planning, and autonomous execution of tasks. These capabilities allow systems not only to generate text or images, but also to browse the web, make purchases, and interact with other software and, eventually, with other agents. Current use cases already include research automation, coding assistance, transaction planning and customer support.
But as autonomy increases, so do concerns about privacy, accountability and governance. The ICO notes that agentic systems may operate with limited human oversight, increasing the likelihood of unexpected behaviour and making it harder to determine responsibility for harmful outcomes. “Increasing agency means that developers and deployers of agentic systems don’t have full control over the behaviour of those systems,” the report warns.
The report identifies novel data protection risks, including special category data inference, expanded automated decision-making, ambiguous purpose specification, challenges for transparency, and new cybersecurity threat vectors. The ICO also raises concerns about how personal assistants powered by agentic AI could centralize highly sensitive personal data, increasing the stakes of governance failures.
To prepare for future deployments, the ICO outlines four plausible adoption scenarios — ranging from low-capability, niche agents to ubiquitous, high-capability systems integrated across critical sectors.
While highlighting risks, the ICO also points to innovation opportunities, including privacy-first agentic controls, data protection compliance tooling, trusted computing approaches and improved benchmarking.
The regulator will now begin engaging industry through workshops, AI guidance updates, and cross-regulatory collaboration, including work with the Digital Regulation Cooperation Forum and G7 data protection authorities.
The report emphasizes that organizations deploying agentic systems remain legally responsible for data protection compliance, even as autonomy increases. It also stresses that governance frameworks built for traditional automation may not translate to multi-agent ecosystems.
The ICO said the report is not formal guidance, but an early attempt to map risks and provide foresight into a rapidly developing field. Its findings come as governments worldwide accelerate their AI regulatory agendas and private-sector adoption intensifies.
Need Help?
If you have questions or concerns about any global guidelines, regulations and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.