EU Watchdog Issues Comprehensive Guidance to Curb AI Risks Across Institutions

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/27/2025

The European Data Protection Supervisor (EDPS) has released sweeping new guidance detailing how EU institutions should identify, assess, and mitigate risks arising from artificial intelligence systems that process personal data. The 55-page document, published on Nov. 11, outlines a structured, technical approach to protecting fundamental rights as public bodies increasingly adopt AI tools across operations. 

The guidance—“Guidance for Risk Management of Artificial Intelligence Systems”—warns that AI technologies pose “significant risks to data subjects’ fundamental rights and freedoms,” citing potential harms ranging from discrimination and bias to inaccurate automated outputs and personal data breaches. It emphasizes that EU institutions, bodies, offices, and agencies must demonstrate accountability under Regulation 2018/1725, the data-protection rulebook governing EU bodies. 

Building on ISO 31000 risk-management standards, the EDPS lays out a lifecycle-based methodology for assessing AI systems, covering their inception, data acquisition, development, validation, deployment, operation, and retirement. Officials are instructed to catalogue potential risks, evaluate their likelihood and severity, and apply mitigation measures proportionate to the harms identified.
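
To make the methodology concrete, here is a minimal sketch of a likelihood-times-severity risk register of the kind such a lifecycle assessment might produce. The numeric scales, lifecycle labels, and treatment thresholds below are illustrative assumptions, not values prescribed by the EDPS document.

```python
from dataclasses import dataclass

# Illustrative 1-5 ordinal scales (assumption; the guidance does not
# prescribe specific numeric scales).
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

@dataclass
class Risk:
    description: str
    lifecycle_phase: str   # e.g. "data acquisition", "deployment", "operation"
    likelihood: str
    severity: str

    @property
    def score(self) -> int:
        # Simple likelihood x severity product, a common risk-matrix scoring.
        return LIKELIHOOD[self.likelihood] * SEVERITY[self.severity]

    @property
    def treatment(self) -> str:
        # Proportionate response bands (thresholds are illustrative).
        if self.score >= 15:
            return "mitigate before deployment"
        if self.score >= 8:
            return "mitigate with documented controls"
        return "accept and monitor"

risks = [
    Risk("biased training data", "data acquisition", "likely", "major"),
    Risk("data drift after release", "operation", "possible", "moderate"),
]
for r in risks:
    print(f"{r.description}: score={r.score}, action={r.treatment}")
```

The point of such a register is traceability: each catalogued risk carries its lifecycle phase, its assessed score, and the mitigation decision, which supports the accountability obligations the guidance emphasizes.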

A key theme throughout the guidance is the centrality of interpretability and explainability. The EDPS stresses that AI systems must not function as inscrutable “black boxes,” warning that opaque decision-making can undermine fairness, transparency, and accountability. It urges EU bodies to require documentation explaining model architectures, data sources, accuracy across groups, known limitations, and potential biases. Where inherent transparency is not possible, institutions should employ explainability methods such as LIME or SHAP. 
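
The core idea behind model-agnostic tools such as LIME and SHAP can be sketched without the libraries themselves: perturb each input feature and measure how the model's output changes. The toy scoring function, feature names, and all-zeros baseline below are assumptions for illustration only; a real deployment would use the actual LIME or SHAP packages against the production model.

```python
def model(features: dict) -> float:
    # Toy scorer standing in for an opaque "black box" model.
    return 0.6 * features["income"] + 0.3 * features["tenure"] - 0.1 * features["age"]

def feature_attributions(model, instance: dict, baseline: dict) -> dict:
    """Perturbation-based attribution: the change in output when one
    feature is reset to its baseline value (a simplified version of the
    idea underlying LIME/SHAP-style explanations)."""
    full = model(instance)
    attributions = {}
    for name in instance:
        perturbed = dict(instance, **{name: baseline[name]})
        attributions[name] = full - model(perturbed)
    return attributions

instance = {"income": 1.0, "tenure": 0.5, "age": 0.2}
baseline = {"income": 0.0, "tenure": 0.0, "age": 0.0}
print(feature_attributions(model, instance, baseline))
```

Even this crude attribution makes the documentation the EDPS asks for easier to produce: for a given decision, an institution can state which inputs drove the output and by how much.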

The guidance dedicates extensive detail to data-protection principles most at risk in AI deployments: fairness, accuracy, data minimisation, security, and data-subject rights. It outlines concrete technical risks—such as biased or unrepresentative training data, algorithmic bias, overfitting, interpretation errors, data drift, and leakage through APIs—and offers corresponding mitigation strategies ranging from dataset auditing and reweighting techniques to human-in-the-loop review, regular model validation, and robust access-control safeguards. 
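
One of the listed risks, data drift, lends itself to a simple monitoring sketch: compare the live input distribution against the training-time distribution. The Population Stability Index below is one common industry heuristic for this (the usual 0.1/0.25 alert thresholds are rules of thumb, not figures from the EDPS guidance), shown here in plain Python for illustration.

```python
import math
from collections import Counter

def psi(expected: list, actual: list, bins: int = 5) -> float:
    """Population Stability Index between a training-time (expected)
    and live (actual) feature distribution; higher means more drift."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        # Small floor avoids log(0) for empty bins.
        return [max(counts.get(b, 0) / len(values), 1e-4) for b in range(bins)]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]               # training-time distribution
live_same = [0.1 * i for i in range(100)]           # no drift
live_shifted = [0.1 * i + 5.0 for i in range(100)]  # shifted live distribution

print(psi(train, live_same))     # near zero: stable
print(psi(train, live_shifted))  # large: investigate possible drift
```

A check like this, run periodically during the operation phase, is one lightweight way to implement the "regular model validation" the guidance recommends.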

The EDPS stresses that AI systems operating with personal data can amplify existing social and institutional biases if not rigorously managed, noting that fairness requires preventing discriminatory outputs that could adversely affect individuals or groups.

Importantly, the supervisor clarifies that the guidance “is not a set of compliance guidelines,” and does not replace legal obligations under EU law. Instead, it provides a structured framework to help institutions systematically identify and treat AI-related risks while retaining responsibility for their own assessments. It also notes that the document is issued in the EDPS’s role as data-protection authority—not in its new role as market-surveillance authority under the EU AI Act. 

With EU institutions accelerating AI adoption—from language tools to predictive analytics—the guidance marks one of the most detailed regulatory blueprints to date for ensuring that public-sector AI remains lawful, transparent, and aligned with fundamental rights.

Need Help?

If you have questions or concerns about any global guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you're informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance news by subscribing to our newsletter.