Dutch Regulators Call for Clarity on AI Emotion Recognition Ban in Workplaces and Schools

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/25/2025

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has released a report summarizing public feedback on the EU AI Act's prohibition of AI-based emotion recognition in workplaces and educational institutions. The document, published in February 2025, highlights concerns over the use of artificial intelligence to infer emotions and intentions, citing concerns about privacy, discrimination, and the technology's scientific validity.

Emotion recognition technology, which analyzes facial expressions, voice patterns, and physiological signals to determine emotional states, has grown increasingly sophisticated. However, the AP report stresses that its reliability remains questionable, and its use in professional or academic settings raises ethical and legal issues.  

The AP’s review of public responses identified three major concerns:  

  1. Growing Use of Emotion Recognition Technology: AI systems designed to infer emotions are being integrated into a range of applications, from job interviews and classroom monitoring to customer service interactions. Respondents expressed concerns about the technology’s accuracy, the potential for misinterpretation, and its intrusive nature. While some see potential benefits—such as monitoring stress levels or enhancing personalized learning—many worry about privacy and ethical implications.  

  2. Legal and Conceptual Ambiguity: The EU AI Act includes a prohibition on emotion recognition in workplaces and education, but respondents pointed out that the legislation lacks clear definitions. Key terms such as “emotions,” “intentions,” and “physical states” remain vaguely defined, leading to uncertainty about which applications are banned and which are permitted. For example, it is unclear whether AI systems that analyze facial expressions to detect fatigue or stress in employees fall under the ban.

  3. Confusion Over Biometric Data Regulations: The EU AI Act’s definition of biometric data differs from that in the EU’s General Data Protection Regulation (GDPR), leading to uncertainty about how emotion recognition systems should be regulated. Some AI applications use biometric indicators—such as heart rate or pupil dilation—to infer emotions, raising questions about whether these fall under the EU AI Act’s restrictions.

The AP report underscores the need for more precise guidelines to help businesses, educators, and developers understand the boundaries of the EU AI Act’s prohibition. The European Commission has issued guidelines on prohibited AI practices, but respondents found them insufficiently detailed, particularly in distinguishing between allowed and banned applications.  

Additionally, the report highlights the limited scope of the prohibition and the need for stricter enforcement mechanisms. While the EU AI Act bans emotion recognition in workplaces and education, the technology remains permissible in other contexts, such as customer interactions or public surveillance, provided it complies with the GDPR and the Act's risk classification rules.

Dutch regulators plan to work with European authorities to refine the implementation of the EU AI Act’s provisions. The AP will contribute to discussions on regulatory interpretations and collaborate with the Dutch Cooperation Platform of Digital Supervisory Authorities to establish clearer enforcement measures.  

Need Help?

If you’re concerned or have questions about how to navigate the EU or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
