The European Commission has approved the content of draft guidelines detailing prohibited artificial intelligence (AI) practices under the EU AI Act, marking a significant step in the EU’s effort to regulate high-risk AI technologies while promoting innovation. These guidelines, though not yet formally adopted, provide legal interpretations and practical examples to ensure the consistent enforcement of the EU AI Act.
The EU AI Act, which entered into force on August 1, 2024, introduces a risk-based classification of AI systems, with some practices deemed to pose an “unacceptable risk” to European values and fundamental rights. The newly released draft guidelines elaborate on Article 5 of the Act, which explicitly bans AI practices that could result in harmful manipulation, social scoring, or biometric surveillance.
According to the draft guidelines, AI systems that employ subliminal techniques to manipulate individuals, exploit vulnerabilities related to age, disability, or socioeconomic situation, or conduct real-time biometric identification in public spaces for law enforcement purposes are among those strictly prohibited. Other banned practices include AI-based social scoring, which could lead to unjust discrimination, and untargeted scraping of facial images from the internet or CCTV footage.
The guidelines also clarify the legal basis for these prohibitions, stating that such AI practices directly violate the Charter of Fundamental Rights of the European Union. Specifically, they aim to protect individuals’ dignity, privacy, and non-discrimination rights while upholding democratic values.
The draft guidelines emphasize that enforcement will be handled by market surveillance authorities in each EU member state, as well as the European Data Protection Supervisor for AI applications within EU institutions. These authorities will have the power to investigate violations and impose penalties.
Violators of the EU AI Act’s prohibitions face severe consequences, with fines reaching up to €35 million or 7% of a company’s worldwide annual turnover, whichever is higher. Public authorities found in violation may also be subject to administrative fines.
While the EU AI Act takes a strict stance on certain AI applications, the guidelines provide some exceptions. For example, real-time biometric identification may be permitted in limited cases, such as searching for victims of serious crimes or preventing imminent threats like terrorist attacks. Similarly, emotion recognition AI can be used in workplaces and educational settings, but only for medical or safety reasons.
The European Commission has stressed that these guidelines remain non-binding and that the ultimate authority for interpreting the EU AI Act lies with the Court of Justice of the European Union. However, they serve as a critical tool for companies, regulators, and policymakers in navigating AI compliance.
Need Help?
If you’re concerned or have questions about how to navigate the EU or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.