As the European Union moves forward with the implementation of the EU AI Act, concerns over algorithmic discrimination and its intersection with the General Data Protection Regulation (GDPR) remain unresolved. A new analysis from the European Parliamentary Research Service highlights key legal uncertainties in managing bias in AI-driven decision-making, particularly in high-risk applications such as hiring, credit scoring, and law enforcement.
The EU AI Act, which entered into force in August 2024, aims to ensure AI development aligns with human rights protections, including the mitigation of discriminatory biases. Under Article 10(5) of the EU AI Act, organizations deploying high-risk AI systems are permitted to process special categories of personal data—such as race, ethnicity, or health status—strictly for bias detection and correction. However, this provision exists in tension with the GDPR’s more restrictive stance on processing sensitive personal data, which requires a specific legal basis such as explicit consent or substantial public interest.
The challenge is particularly evident in sectors like employment and finance, where AI models often make high-stakes decisions. AI-driven hiring systems, for example, have been found to exhibit biases in gender and race, while algorithmic credit scoring tools risk reinforcing economic disparities based on demographic factors. The EU AI Act seeks to enable proactive bias detection in these cases, but GDPR’s limitations on data collection may hinder comprehensive auditing efforts.
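In practice, the kind of bias detection Article 10(5) contemplates often starts with comparing outcome rates across demographic groups. The following is a minimal, illustrative sketch of such a check on hypothetical hiring data; the column names, sample records, and the 0.8 "four-fifths" threshold are illustrative conventions from US employment-testing practice, not requirements of the EU AI Act or GDPR:

```python
# Illustrative bias audit: compare selection rates across a protected
# attribute and compute a disparate impact ratio (all data hypothetical).

def selection_rates(records, group_key, outcome_key):
    """Return the fraction of positive outcomes for each group."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + r[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical applicant records: a protected attribute and a binary outcome.
applicants = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "A", "hired": 1},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

rates = selection_rates(applicants, "group", "hired")
ratio = disparate_impact_ratio(rates)
print(rates)                   # per-group selection rates
print(f"ratio = {ratio:.2f}")  # a common rule of thumb flags ratios below 0.8
```

Note that even this simple audit requires the protected attribute ("group") to be recorded for each applicant, which is exactly the special-category processing that puts the Act's Article 10(5) in tension with GDPR.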
One emerging concern is whether the EU AI Act’s bias-mitigation mandate provides sufficient legal justification for processing special categories of data under GDPR. While GDPR allows such processing in cases of substantial public interest, it remains unclear whether AI fairness initiatives qualify under this exemption. Legal scholars suggest further legislative clarity or reforms may be necessary to reconcile the two frameworks.
The report also examines potential risks in other AI applications, such as autonomous vehicles and generative AI. Machine vision algorithms in self-driving cars, for instance, have shown lower detection rates for individuals with darker skin tones, raising safety concerns. Meanwhile, generative AI chatbots can unintentionally spread discriminatory content, underscoring the need for continuous monitoring.
Need Help?
If you’re wondering how rules on algorithmic discrimination, or any other AI regulations and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.