Emotion Recognition Systems Prohibited under EU AI Act

The final draft of the EU AI Act sets out a prohibition on biometric emotion recognition systems in certain contexts. It stipulates that the following practice will be prohibited:

The placement on the market, putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person within the areas of workplace and educational institutions, except in cases where the use of the AI system is intended for medical or safety reasons.

AI Act final draft

The recital accompanying this provision explains which practices are in scope:

The notion of an emotion recognition system for the purpose of this regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. This refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue. It refers for example to systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. It also does not include the mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions. These expressions can be basic facial expressions such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, for example a raised voice or whispering.

AI Act final draft

It’s important to note that the AI Act’s prohibition focuses on preventing potential misuse of emotion detection technology in sensitive areas like the workplace and education, balancing the benefits of AI against the rights and privacy of individuals.

Applications Affected by the Prohibition

While the EU AI Act’s prohibition is not all-encompassing, it does target specific scenarios where the risk of harm or privacy infringement is deemed higher. Here are some AI use cases that might fall under this prohibition:

  • Employee Performance Monitoring: AI systems used in workplaces to monitor employee engagement and emotional states during meetings or while performing tasks. These systems could analyze facial expressions, voice tone, and other biometrics to infer emotional states like stress, satisfaction, or frustration.
  • Student Engagement Tracking in Education: Similar to employee monitoring, systems used in educational institutions to gauge student engagement or emotional responses during lectures, based on facial expressions or voice analysis.
  • AI-Based Recruitment Tools: Tools used in the hiring process that assess a candidate’s emotional state or truthfulness through analysis of facial expressions, voice modulation, or body language during interviews.
  • Customer Service Analysis Tools: AI systems that analyze customer emotions during interactions with service representatives, either in-person or in call centers, to assess service quality or customer satisfaction.
  • Emotion-Driven Personalization in Retail: Systems in retail environments that assess customer emotions to personalize marketing or sales approaches, based on analysis of facial expressions or body language.
  • Mental Health Assessment Tools: AI applications used for inferring emotional states or mental health conditions from biometric data, unless explicitly used for medical purposes as per the exception.
  • Automated Classroom Monitoring: Systems that analyze student emotions in real-time to assess engagement, understanding, or well-being during class sessions.
  • Workplace Safety Tools: Unless specifically designed for medical or safety reasons, tools that infer worker emotional states to predict potential safety incidents or stress levels.
  • AI in Public Speaking Training: Tools that analyze emotional expression in trainees during public speaking or presentation training sessions.
  • Interactive Learning Environments: AI systems in e-learning that adjust content delivery based on inferred emotional states of learners.
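To make the prohibited category concrete, here is a minimal, purely illustrative sketch of the kind of inference these systems perform: mapping biometric features to an emotion label. The feature names, thresholds, and labels are assumptions for illustration; real products use trained ML classifiers rather than hand-written rules.

```python
from dataclasses import dataclass


@dataclass
class BiometricFrame:
    """Illustrative biometric features extracted from video/audio of one person.
    All fields and ranges are hypothetical."""
    smile_intensity: float    # 0.0-1.0, e.g. from facial landmark analysis
    brow_furrow: float        # 0.0-1.0
    voice_pitch_delta: float  # semitones above the speaker's baseline


def infer_emotion(frame: BiometricFrame) -> str:
    """Toy rule-based stand-in for the ML classifiers real systems use:
    maps biometric features to an inferred emotional state."""
    if frame.smile_intensity > 0.6:
        return "satisfaction"
    if frame.brow_furrow > 0.5 and frame.voice_pitch_delta > 2.0:
        return "frustration"
    return "neutral"


# Deployed in a workplace or classroom to score engagement, inference like
# this would fall under the prohibition; detecting a physical state such as
# fatigue for safety purposes would not, per the recital quoted above.
print(infer_emotion(BiometricFrame(0.8, 0.1, 0.0)))  # satisfaction
```

What matters for the prohibition is not the sophistication of the model but the purpose: inferring emotions of natural persons from biometric data in the workplace or in education.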

Compliance and Consequences

If an application is considered a prohibited practice, its providers have only six months after the AI Act enters into force to either redesign it to avoid the prohibited practice or withdraw it from the European market. Non-compliance can lead to fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
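The penalty ceiling for prohibited practices is the higher of a fixed amount (€35 million) and a share of turnover (7% of total worldwide annual turnover), so exposure scales with company size. A short worked example, using illustrative turnover figures:

```python
# Penalty ceiling for prohibited practices under the AI Act:
# the higher of a fixed cap and a share of worldwide annual turnover.
FIXED_CAP_EUR = 35_000_000   # €35 million
TURNOVER_SHARE = 0.07        # 7% of total worldwide annual turnover


def max_fine(turnover_eur: float) -> float:
    """Maximum possible fine: whichever of the two ceilings is higher."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * turnover_eur)


# Illustrative turnovers, not real cases:
print(max_fine(100_000_000))    # smaller provider: fixed cap binds -> 35,000,000
print(max_fine(2_000_000_000))  # large provider: 7% binds -> 140,000,000
```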

In conclusion, the EU AI Act’s prohibition of emotion recognition systems reflects a commitment to balancing the benefits of AI with the protection of individual rights and privacy. It underscores the importance of AI governance and regulation in ensuring responsible and ethical AI practices.