European Guidelines Aim to Help Equality Bodies Address AI-Driven Discrimination

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/27/2026
In News

New European policy guidelines are seeking to strengthen the ability of equality bodies and national human rights institutions to address the growing risks of discrimination linked to artificial intelligence and automated decision-making systems across Europe.


Published with support from the Council of Europe and the European Union, the “European Policy Guidelines on AI and Algorithm-Driven Discrimination” provide a practical roadmap for regulators and oversight institutions navigating the expanding use of AI in public and private sectors. 


The document arrives as public administrations increasingly deploy AI tools in areas such as welfare services, migration management, law enforcement, education, and employment. While AI systems promise efficiency gains, the report warns they also pose significant risks to fundamental rights, including equality and non-discrimination, particularly when deployed without adequate oversight. 


At the core of the guidelines is the EU AI Act, which establishes a risk-based framework governing AI deployment across member states. The report explains how equality bodies can use the AI Act’s provisions to monitor and address discrimination risks as governments and organizations adopt automated decision systems. 


The guidance focuses heavily on prohibited AI practices outlined in Article 5 of the AI Act, including systems that manipulate or deceive individuals, social scoring systems, certain predictive policing tools, and the scraping of facial images to build recognition databases. According to the guidelines, these applications conflict with EU values related to human dignity, democracy, and fundamental rights protections. 


Beyond outright bans, the document highlights “high-risk” AI systems — technologies used in sensitive areas such as employment, education, social welfare, and law enforcement — that must meet strict requirements around risk management, data governance, and human oversight. Equality bodies are encouraged to play a proactive role in reviewing how these systems are classified and whether organizations comply with obligations designed to prevent discrimination. 


The guidelines also emphasize transparency mechanisms introduced by the AI Act. New databases and registration requirements for certain high-risk systems could provide oversight institutions with clearer visibility into where AI tools are being used and how they operate. This, the report argues, could help regulators identify problematic deployments and support individuals affected by automated decisions. 


Enforcement is another key focus. The document outlines how equality bodies can work alongside data protection authorities, market surveillance agencies, and other regulators to coordinate investigations and remedies. Suggested actions include complaint handling, public awareness campaigns, litigation support, and collaboration with civil society groups to detect algorithmic discrimination early. 


In addition to the AI Act, the guidelines draw on recent EU directives aimed at strengthening equality bodies themselves. These standards address mandates, independence, resourcing, and investigative powers, positioning equality institutions to respond more effectively to emerging AI-related harms.


The report also provides sector-specific analysis, highlighting areas where AI deployment may carry elevated discrimination risks, including migration and border control, employment, education, and social security systems. For each area, the document links practical use cases to relevant legal obligations under the AI Act and broader European human rights frameworks. 


Authors of the guidelines stress that the recommendations are meant to be adaptable across national contexts. Rather than prescribing one model, the report encourages equality bodies to use their existing mandates — including advisory roles and policy engagement — to shape how AI governance evolves within their countries. 


As AI adoption accelerates across Europe, the guidelines signal a broader shift toward embedding equality and non-discrimination considerations into technological oversight. The report concludes that ensuring AI systems respect human rights will depend not only on new laws, but also on empowered institutions capable of monitoring, enforcing, and shaping responsible AI deployment in practice.


Need Help?


If you’re wondering how these guidelines, or any other government’s AI bill or regulation, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while answering your questions and concerns.

