EU Rights Agency Warns Gaps in High-Risk AI Assessments Could Undermine Fundamental Rights

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/19/2025
In News

The European Union Agency for Fundamental Rights (FRA) has warned that gaps in how high-risk artificial intelligence systems are identified and assessed could undermine fundamental rights protections under the EU’s landmark Artificial Intelligence Act, according to a new report published in 2025.

The report, Assessing High-Risk Artificial Intelligence: Fundamental Rights Risks, examines how AI systems used in sensitive areas such as asylum, education, employment, law enforcement, and public benefits are being classified and evaluated in practice. Drawing on interviews with AI providers, deployers, and experts across several EU member states, the FRA found widespread uncertainty about how to interpret key provisions of the AI Act, which entered into force in August 2024.

One central concern is the definition of “high-risk” AI systems. While the AI Act adopts a risk-based approach, the FRA cautions that vague interpretations—particularly around exemptions known as “filters”—could allow systems with significant societal impact to avoid stricter safeguards. The report warns that even relatively simple or “preparatory” AI tools can materially influence decisions affecting people’s rights, such as access to social benefits or asylum determinations.

The FRA also found that many organizations deploying AI lack structured methods for assessing risks to fundamental rights beyond data protection and discrimination. Awareness of impacts on other rights, including the right to education, the presumption of innocence, and access to effective remedies, remains limited. As a result, mitigation measures are often fragmented and overly reliant on human oversight, which the report says is insufficient on its own.

To address these gaps, the FRA calls for clearer guidance from the European Commission, stronger oversight by independent authorities, and greater investment in evidence-based testing of AI systems. The agency argues that embedding fundamental rights assessments into AI development is not only a legal requirement but also essential for building public trust and supporting responsible innovation under the EU’s AI framework.

Need Help?

If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.