New Report Urges EU to Clarify Governance of AI Agents Under AI Act

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 06/06/2025
In News

A new policy report from the Centre for European Policy Studies (CEPS) is calling on the European Union to clarify how its flagship AI regulation applies to AI agents: autonomous systems capable of initiating actions without direct user prompting. Titled “Ahead of the Curve: Governing AI Agents Under the EU AI Act,” the report warns that the EU’s current regulatory framework may not sufficiently address the risks posed by this rapidly evolving class of artificial intelligence.

The EU AI Act, adopted in 2024 and set to take full effect in 2026, is designed to ensure that AI systems placed on the EU market are safe and respect fundamental rights. However, CEPS researchers argue that the legislation is not yet equipped to govern AI agents that exhibit high levels of autonomy, adaptability, or interaction with their environments—traits associated with systems like AI-powered personal assistants, financial trading bots, and virtual employees.

The report states that AI agents challenge traditional assumptions in the EU AI Act, particularly regarding who is considered the ‘provider’ of a system and how responsibility is assigned. Because these systems can generate new goals and take independent actions post-deployment, it is unclear whether they fall within the scope of the original provider’s responsibility or whether a new governance mechanism is needed.

The report identifies a risk that AI agents may escape regulation altogether or lead to fragmented enforcement across the EU. It proposes that the European Commission issue interpretive guidance or delegated acts to clarify how AI agents should be categorized, risk-rated, and monitored. This is especially urgent given the anticipated rise of AI agents across sectors ranging from healthcare and education to defense and public administration.

CEPS also urges regulators to rethink how conformity assessments and post-market monitoring obligations should apply when an AI system evolves after deployment. Traditional regulatory approaches that focus on pre-market compliance may be insufficient when dealing with agents that learn, adapt, or integrate with other systems in unpredictable ways.

The authors recommend a proactive governance approach, suggesting mechanisms such as dynamic risk classification, lifecycle-based oversight, and clearly defined responsibilities for users who fine-tune or deploy AI agents in new contexts.

They also call for expanding AI literacy among regulators, providers, and the public to ensure that accountability mechanisms keep pace with technological change. Without this, the report warns, AI agents could exacerbate risks to privacy, security, and fundamental rights—especially if they become embedded in sensitive infrastructure or decision-making processes.

As the EU AI Act moves toward full implementation, the report concludes that recognizing and adapting to the unique features of AI agents is critical for maintaining the law’s credibility, efficacy, and long-term relevance in a fast-evolving digital landscape.

Need Help?

If you’re concerned or have questions about how to navigate the EU or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
