Big Picture: The AI Act’s Approach and Purpose
The European Commission’s guidance on the AI Act recognizes that artificial intelligence can greatly enhance productivity and innovation—yet it must also respect people’s rights and freedoms. Here, we’ll explore how the AI Act identifies and bans certain practices that directly threaten fundamental rights, why they’re considered unacceptable, and how these prohibitions will play out in practice.
General Purpose of the AI Act
The AI Act imposes rules on providers and deployers of AI systems in the EU, aiming to ensure a high level of fundamental rights protection—health, safety, privacy, non-discrimination—while fostering innovation. It is built on a risk-based approach. Certain “unacceptable” uses are banned outright (Article 5), “high-risk” uses get stringent requirements, “limited-risk” uses have transparency duties, and minimal-risk uses can remain largely unregulated (though still subject to voluntary codes of conduct).
In the Commission’s view, this layered approach helps balance innovation with safeguarding rights. The “unacceptable risk” category is narrow and specifically enumerated so that it addresses uses deemed fundamentally incompatible with EU values.
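To make this layered structure concrete, here is a minimal Python sketch that models the four risk tiers and the broad consequence attached to each. The tier names, the OBLIGATIONS mapping, and the obligations_for helper are illustrative shorthand for this article, not terminology defined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the AI Act's four-tier, risk-based structure."""
    UNACCEPTABLE = "unacceptable"  # Article 5: prohibited outright
    HIGH = "high"                  # stringent requirements apply
    LIMITED = "limited"            # transparency duties apply
    MINIMAL = "minimal"            # largely unregulated; voluntary codes of conduct

# Hypothetical mapping from tier to the broad regulatory consequence.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Banned: may not be placed on the market, put into service, or used.",
    RiskTier.HIGH: "Permitted only if the Act's high-risk requirements are met.",
    RiskTier.LIMITED: "Permitted, subject to transparency obligations.",
    RiskTier.MINIMAL: "Permitted; voluntary codes of conduct are encouraged.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the broad consequence attached to a risk tier (illustrative only)."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.UNACCEPTABLE))
```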
Why Some AI Practices Are Prohibited
Certain AI practices, by their nature, can severely interfere with fundamental rights and EU values like human dignity, equality, and democracy. The Commission’s guidelines interpret and clarify these prohibitions so that all Member States enforce them uniformly. The goal is to prevent abuses that undermine personal autonomy, privacy, or equitable treatment—no matter how technologically advanced the system.
Structure of the Guidelines
Within Article 5, the AI Act singles out several distinct categories of prohibited practices. The Commission’s guidance explains the rationale for each:
• Article 5(1)(a) & (b): bans certain manipulative, deceptive, or exploitative AI uses that “materially distort” someone’s behavior and lead to significant harm.
• Article 5(1)(c): bans social scoring that leads to unjust or disproportionate detrimental treatment.
• Article 5(1)(d): bans criminal offence risk assessments or predictions based solely on a person’s profiling/personality traits.
• Article 5(1)(e): bans untargeted scraping of facial images from internet/CCTV to build face recognition databases.
• Article 5(1)(f): bans emotion recognition in workplace/education (except for safety/medical reasons).
• Article 5(1)(g): bans biometric categorization that infers someone’s race, religion, political opinions, sexual orientation, etc.
• Article 5(1)(h): bans real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement, with narrow exceptions (kidnapping victims, imminent threats, serious crimes).
Enforcement & Timing
Prohibitions under Article 5 apply six months after the AI Act’s entry into force, i.e. from 2 February 2025. Detailed rules on enforcement, penalties, and market surveillance authorities apply later (from 2 August 2025), but the prohibitions themselves are directly enforceable as of 2 February 2025. Member States may also be stricter in certain areas, notably by declining to allow the real-time RBI exceptions under Article 5(1)(h).
Even though the Act staggers enforcement deadlines, operators and developers must be ready: after February 2025, they could face legal challenges if they offer or deploy systems covered by Article 5’s prohibitions—even if formal national enforcement is still gearing up.
Overview of All Prohibited AI Practices (Article 5)
The Act’s Article 5 enumerates eight sub-paragraphs of unacceptable uses. Below is a quick overview, later expanded article by article:
(a) Manipulative/Deceptive Techniques Causing Significant Harm
Subliminal or purposefully manipulative or deceptive AI that “materially distorts” a person’s behavior in ways that result in or are likely to result in significant harm.
(b) Exploitation of Vulnerable Groups
AI systems that exploit vulnerabilities due to age, disability, or socio-economic situation, again leading to or risking significant harm.
(c) Social Scoring
AI-based social scoring that leads to unjustified or disproportionate detrimental treatment in an unrelated context, e.g. combining data from one context and using it in another to punish or exclude people.
(d) Crime Risk Prediction Solely Based on Profiling
Assessing the likelihood of a natural person’s future criminal offending solely using their personality traits or profiling—unless it’s to support an existing investigation based on objective facts.
(e) Building Face Recognition Databases via Untargeted Scraping
Scraping the internet or large CCTV footage sets in an indiscriminate (“vacuum cleaner”) way to gather masses of facial images to create or expand face-recognition databases.
(f) Emotion Recognition at Work or Education
Inferring or identifying emotions of people at workplaces or schools, with an exception if used strictly for medical or safety reasons.
(g) Biometric Categorization of Sensitive Traits
Categorizing people individually by analyzing their biometric data to infer “sensitive” traits such as race, political opinions, religion, sexual orientation, etc. (Exception: labeling lawfully acquired biometric data sets for legitimate safety or data-quality reasons.)
(h) Real-Time Remote Biometric Identification for Law Enforcement
Real-time RBI in publicly accessible spaces is banned for police or other law enforcement unless it is strictly necessary (to find kidnapping victims, counter an imminent serious threat, or locate suspects of certain serious crimes listed in Annex II), and only with prior authorization from a judge or independent body.
Article-by-Article Breakdown
Here we dive deeper into each category, revealing why the law takes such a strong stance and when certain borderline cases might arise.
Articles 5(1)(a) & 5(1)(b) – Harmful Manipulation & Exploitation
Reasoning: Protects autonomy, dignity, and free will.
• (a) Bans AI that covertly manipulates or deceives people (subliminal cues, deceptive chatbots, manipulative recommendation loops) and is likely to cause significant harm (e.g. mental distress, self-harm, or large financial losses).
• Key conditions: (1) the technique is subliminal, purposefully manipulative, or deceptive; (2) it materially distorts a person’s behavior by impairing their ability to make an informed decision; and (3) the distortion causes, or is reasonably likely to cause, significant harm. All conditions must hold together (a minimal screening sketch follows this list).
• (b) Focuses on especially vulnerable groups—children, disabled people, or those in a precarious socio-economic context—and bars exploitative AI that distorts their behavior and leads to serious harms.
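To illustrate how these conditions combine, the screening helper below flags a practice only when a qualifying technique (or exploitation of a vulnerable group), a material distortion of behavior, and (likely) significant harm are all present. The Article5Screen dataclass and its field names are hypothetical triage shorthand, not a legal test, and do not replace a proper legal assessment.

```python
from dataclasses import dataclass

@dataclass
class Article5Screen:
    """Hypothetical triage record for a practice under Article 5(1)(a)/(b)."""
    uses_subliminal_manipulative_or_deceptive_technique: bool  # relevant to (a)
    exploits_vulnerability_of_protected_group: bool            # relevant to (b)
    materially_distorts_informed_decision_making: bool
    causes_or_is_likely_to_cause_significant_harm: bool

def is_potentially_prohibited(s: Article5Screen) -> bool:
    """The conditions are cumulative: a qualifying technique (or exploitation),
    a material distortion of behavior, and (likely) significant harm must all hold."""
    qualifying = (
        s.uses_subliminal_manipulative_or_deceptive_technique
        or s.exploits_vulnerability_of_protected_group
    )
    return (
        qualifying
        and s.materially_distorts_informed_decision_making
        and s.causes_or_is_likely_to_cause_significant_harm
    )

# A benign nudge that is unlikely to cause significant harm is not flagged.
benign_nudge = Article5Screen(True, False, True, False)
assert not is_potentially_prohibited(benign_nudge)
```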
In-scope vs Out-of-scope:
Out-of-scope examples include benign or moderate nudges, lawful persuasion, or manipulative techniques that do not lead to significant harm. For example, a therapy chatbot that subliminally encourages healthy habits may be acceptable if it benefits the user and is unlikely to cause significant harm. On the other hand, the Commission highlights that if the covert influence is likely to harm users significantly (especially vulnerable users), it moves into clearly banned territory.
Article 5(1)(c) – Social Scoring
Reasoning: Prevents mass “social credit” style systems that rank or penalize individuals beyond permissible contexts. The EU is especially wary of “general-purpose” data used across different contexts, leading to widespread surveillance or social pressure.
Practice Banned: Evaluating natural persons or groups over time based on social behavior or personal characteristics and using that score to penalize them in contexts unrelated to the data’s origin, or in ways that are disproportionate.
Examples
• Banned: A municipality collecting data on residents’ library returns, volunteering, or social media behavior and tying that to, say, loan approvals or job opportunities.
• Allowed: Genuine credit-scoring for financial risk using only relevant financial data, subject to consumer & data-protection law (no unscrupulous mixing with irrelevant data from social media).
The Commission underscores that social scoring often signals broader social control, so the ban is intended to stop such practices before they take hold.
Article 5(1)(d) – Crime Risk Assessment Based Solely on Profiling
Reasoning: People must be judged on actual behavior, not solely on algorithmic “predictions” about future offending using personality traits.
Prohibition: Bans placing on the market, putting into service, or using an AI system that predicts whether someone might commit a crime based only on their personality traits or profiling. If the assessment supports a human evaluation grounded in objective, verifiable facts from an existing criminal investigation, it is not “solely” profiling and thus not automatically banned, but it is then treated as a “high-risk” system.
Out-of-scope:
• “Place-based” or location-based predictive policing (predicting high-crime areas) is not automatically banned (though still high-risk), as it is not about singling out an individual’s predicted criminality.
In practice, the ban reflects the EU’s emphasis that AI tools should not replace due legal process, nor label individuals “likely to offend” absent any specific, verifiable evidence.
Article 5(1)(e) – Untargeted Scraping of Facial Images
Reasoning: Protects privacy and prevents creation of massive, indiscriminate face databases from social media or CCTV.
Ban: Building or expanding face-recognition databases through indiscriminate scraping, with no regard to consent or relevance. “Untargeted” means vacuuming up large volumes of images or footage and turning them into a database.
Out-of-scope:
• Targeted scraping (e.g., searching for a missing specific person) is not “untargeted.”
• Datasets that do not serve identification (e.g., training a generative model on synthetic faces) may not be covered by this ban, though other rules like GDPR can still apply.
The Commission points out that what matters is whether the system is “reasonably likely” to transform into face-recognition capabilities—especially if personal data is grabbed from countless unsuspecting users.
Article 5(1)(f) – Emotion Recognition at Work & School
Reasoning: AI-based “emotion detection” is considered scientifically dubious, intrusive, and fosters an imbalance of power in workplaces/schools.
Ban: No placing/using AI to “infer emotions” in workplaces or educational institutions, except for strictly medical or safety uses. If used to check pilot fatigue or help a special-needs child interpret emotional cues in therapy, that might be allowed if it is truly for safety/medical reasons.
Out-of-scope:
• Commercial uses of emotion recognition (e.g. in marketing) are not prohibited, but still classified as “high-risk.”
The Commission notes that emotion recognition can be misleading, encourage constant surveillance, or create undue pressure on employees/students who fear being monitored for intangible mood changes.
Article 5(1)(g) – Biometric Categorization by Sensitive Traits
• Reasoning: A system that scans your face or gait and infers religion or sexual orientation can easily lead to severe discrimination or abuse.
• Ban: Biometric categorization that deduces or infers race, political opinions, sexual orientation, etc. on an individual basis.
• Exception for mere labeling or filtering of lawfully acquired data sets (e.g. to ensure balanced demographic coverage in training).
In the EU’s view, such categorization crosses ethical lines and can quickly become a tool of profiling or harassment, hence the outright ban.
Article 5(1)(h) – Real-Time Remote Biometric Identification (RBI) for Law Enforcement
• Reasoning: Real-time, mass face recognition in public spaces is seen as a major threat to privacy, autonomy, and freedom of assembly.
• Prohibition: Using real-time RBI in publicly accessible spaces for law enforcement is banned outright, with only three narrowly defined exceptions:
1. Targeted search for specific victims (kidnapping, human trafficking, etc.) or missing persons.
2. Prevention of imminent life-threatening dangers, e.g. a credible terrorist attack or an active shooter.
3. Identification/localization of suspects of certain serious crimes (listed in Annex II) punishable by 4+ years.
Each individual use requires prior authorization from a judicial or independent authority, must be “strictly necessary,” and must be limited in time, place, and scope. Mass or arbitrary real-time face matching remains banned.
Implementation, Enforcement, and National Laws
As the AI Act’s bans come into force, providers and deployers must also understand how the Commission, along with national authorities, will ensure compliance.
Market Surveillance Authorities
• National authorities monitor compliance with the AI Act. They can act on their own or on complaints.
• Prohibitions in Article 5 carry the highest penalty tier: fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher (a rough calculation sketch follows this list).
• Between 2 February 2025 (when the bans start to apply) and 2 August 2025 (when the enforcement provisions apply), the prohibitions are nonetheless in effect: operators can be taken to court even though the new market surveillance system is not yet fully in place.
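For a rough sense of exposure, the sketch below (referenced in the list above) computes the upper fine cap attached to Article 5 violations: EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. The function name and inputs are illustrative; actual fines are set case by case by the competent authorities.

```python
def article5_max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative upper cap for Article 5 violations: the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# Example: EUR 2 billion turnover gives a cap of EUR 140 million.
print(f"{article5_max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```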
Interaction with Other EU Laws
• The AI Act is designed to complement data protection (GDPR, LED) and consumer protection (UCPD, Digital Services Act, etc.). The prohibited practices are narrower in scope but more absolute.
• Data protection rules still apply in full. Indeed, many of these banned practices would also breach data protection law or anti-discrimination law.
This means if an AI practice is not outright prohibited, it might still be disallowed or heavily restricted under data protection, anti-discrimination, or sector-specific rules.
Member State Laws
• For the real-time RBI exceptions (Article 5(1)(h)), each Member State that wishes to allow any of them must adopt or update its own laws, detailing exactly how law enforcement can request authorization, which serious crimes are covered, and so on.
• Member States may choose to disallow these exceptions altogether, imposing stricter bans on RBI.
This creates room for variations across the EU, but the Commission guidelines ensure a common minimum standard.
Fundamental Rights Impact Assessment (FRIA)
• For any allowed exception to Article 5(1)(h), the deploying law enforcement authority must carry out a Fundamental Rights Impact Assessment (FRIA). This is separate from, though complementary to, the standard Data Protection Impact Assessment (DPIA).
• FRIA systematically reviews how severely fundamental rights are impacted and what mitigations are in place—particularly for real-time biometric identification in public spaces.
The FRIA process draws explicit attention to the consequences for privacy, potential discrimination, and freedom of assembly before any real-time RBI use can begin.
Timing
• Article 5 bans take effect on 2 February 2025.
• Organizations have six months from the Act’s entry into force (1 August 2024) to stop or retool any system that would violate Article 5.
• However, the relevant official “market surveillance” enforcement structures begin on 2 August 2025.
• Providers and deployers should still ensure compliance right away—courts can enforce it or individuals can claim violations even before the official market surveillance mechanism is fully in place.
Key Takeaways and Next Steps
Below are some core lessons the Commission highlights for all AI stakeholders—providers, deployers, and end users:
Prohibited Means Banned at Market & Use Levels
If a practice is in Article 5, it cannot be placed on the EU market, put into service, or used. Providers can’t offer it, and deployers can’t adopt or run it. Some exceptions exist only for real-time RBI (Article 5(1)(h)) under extremely narrow conditions.
High-Risk vs. Prohibited
Many AI applications are “high-risk” but still permissible if they meet the Act’s requirements. By contrast, the practices listed in Article 5 are disallowed outright, or allowed only under narrowly defined exceptions. The Commission emphasizes that meeting “high-risk” obligations is not enough if the system is actually part of a banned practice.
Practical Advice
Vendors (providers) must do careful due diligence to ensure none of their systems can be reasonably foreseen to be used in a banned manner.
Deployers must ensure actual usage does not contravene Article 5. Even if a contract says “don’t use for that purpose,” deployers remain liable for how they deploy the system.
Looking Ahead
• The Commission will regularly update these Guidelines if technology or case law changes.
• Member States must pass or amend laws if they want to allow the narrow real-time RBI exceptions. They must also set up or designate authorities to grant or refuse authorizations for each use.
For companies, it’s crucial to monitor how national laws and guidance evolve. In some countries, biometrics or “high-risk uses” might be curtailed even further; in others, certain exceptions for law enforcement might become possible—but always under tight controls.
Use Case Examples: Borderline Scenarios
Finally, let’s illustrate how these prohibitions can play out in practice, especially when an AI system sits on the borderline of what Article 5 disallows. These examples come from real-life contexts where developers or deployers must stop and ask: “Are we nearing a prohibited practice?”
1. Harmful Manipulation & Deception
Borderline Use Case
Title: Subliminal Fitness Coach
Context:
A lifestyle-coaching AI app nudges users to buy “premium” workout gear and supplements. It flashes pop-ups or quick messaging that some might consider subliminal.
Why It’s Borderline:
It may “materially distort” decisions if the user is unaware of the subtle influence. Harm might not be obviously severe, but it could be if vulnerable users overspend or make unhealthy choices under pressure.
Extra Caution:
Providers should review whether the tactics risk significant psychological or financial harm. Deployers should confirm the nudge methods are transparent and fully within users’ conscious awareness.
2. Exploitation of Vulnerabilities
Borderline Use Case
Title: Seniors’ Language Chatbot
Context:
A freemium language tutor chatbot pushes older adults toward high-priced subscription plans. Continual “personalized offers” exploit social isolation or digital inexperience.
Why It’s Borderline:
The AI could be leveraging age-related or socio-economic vulnerabilities. Unclear whether the financial harm is “significant” or merely annoying upselling.
Extra Caution:
If older users end up in debt or psychologically distressed, the system risks violating the ban. Developers should ensure marketing/persuasion is ethically balanced and non-exploitative.
3. Social Scoring
Borderline Use Case
Title: Community Engagement Points
Context:
A municipality’s app assigns “civic rating” points for volunteer hours or social behavior. These scores might influence priority for certain services, e.g., event tickets or small grants.
Why It’s Borderline:
Potentially uses data from multiple contexts (social media or local groups) to create a general “social score.” Could lead to disproportionate or unfair treatment if the city denies some benefits based on low “civic rating.”
Extra Caution:
Must verify that the final usage is not “unrelated” to the data’s source. If the points meaningfully penalize people out of proportion, it veers into prohibited social scoring.
4. Crime Risk Prediction Solely Based on Profiling
Borderline Use Case
Title: Retail Theft Risk Tool
Context:
A private security AI tries to flag potential shoplifters based on “impulsivity” or “suspected stress level.” Relies mostly on personal traits and estimated socio-economic factors.
Why It’s Borderline:
If the AI uses only personality traits or profiling—without objective crime links—this may be “solely based on profiling.” Could step into law enforcement tasks if the retailer cooperates with police.
Extra Caution:
Providers should see if actual facts (past theft history) factor in, not just personality inferences. If no real evidence is used, the tool risks crossing Article 5(1)(d).
5. Untargeted Scraping of Facial Images
Borderline Use Case
Title: General “Face Data” Training Set
Context:
A startup scrapes thousands of random profile photos from public posts to build a face detection model. This database could be repurposed for face recognition in the future.
Why It’s Borderline:
“Untargeted” harvesting of images might be seen as building or expanding a recognition database. They claim it’s for “face detection,” not “face ID,” but the line can blur.
Extra Caution:
If the scraping can foreseeably become a face recognition product, it may violate the ban. Ensure technical and contractual measures limit usage to detection-only (with user consent, if relevant).
6. Emotion Recognition at Work or School
Borderline Use Case
Title: Workplace Stress Monitor
Context:
An AI system for shift scheduling interprets workers’ micro-expressions and vocal tone to detect “negativity.” The employer uses the results in performance assessments.
Why It’s Borderline:
This might go beyond mere “fatigue detection” and effectively do emotion recognition, which is banned in workplaces unless for strict safety/medical ends. The employer’s reasons are partly productivity/HR decisions, not pure safety.
Extra Caution:
If it’s indeed inferring anger, frustration, or sadness, the system likely violates Article 5(1)(f). The bar for “safety reasons” is narrow—this borderline scenario calls for a thorough legal check.
7. Biometric Categorization of Sensitive Traits
Borderline Use Case
Title: Mall Demographic Camera
Context:
A camera system labels passersby by age bracket, hair color, approximate ethnicity, or religious clothing for marketing metrics. The data is stored at the individual level and can identify repeated visits or patterns.
Why It’s Borderline:
Inferring religion or ethnicity from clothing or facial structure enters “sensitive trait” territory. The borderline question is whether it’s purely “labeling data sets” or actually categorizing real individuals by race/religion.
Extra Caution:
If the system categorizes individuals by a sensitive trait such as race or religion, it is likely illegal. Distinguish legitimate marketing segmentation (e.g. age range) from prohibited “sensitive trait” categorization.
8. Real-Time Remote Biometric Identification (RBI) in Public
Borderline Use Case
Title: Citywide “Watchlist” Cameras
Context:
Police in a busy downtown area run real-time face scans on passersby to see if they match any known watchlist. The watchlist includes everything from petty theft suspects to missing people.
Why It’s Borderline:
Article 5(1)(h) exceptions are strictly for specific serious crimes, imminent threats, or finding specific missing persons. Using it as a broad indefinite search (covering lesser offences) is likely prohibited.
Extra Caution:
The police or vendor must check if the watchlist focuses on allowed exceptions (terrorism, kidnapping, etc.). Sweeping usage is disallowed—any broader approach triggers the ban.
Final Thoughts
These scenarios show how an AI application can drift close to an outright ban if it crosses certain lines—like exploiting vulnerable users, scraping facial images en masse, or imposing hidden manipulations. The European Commission’s guidelines continually emphasize proportionality (is the measure out of scale or used in the wrong context?), necessity (could less invasive alternatives work?), and respect for fundamental rights (dignity, privacy, data protection, etc.).
For AI developers, risk managers, and compliance teams, these distinctions are vital. Even if your application doesn’t immediately appear “unacceptable,” it can enter a grey zone if you expand functionalities (like turning face detection into full face recognition) or apply an AI tool in an unintended domain. Keeping track of these lines—and ensuring your product or service stays clear of them—will be essential as the EU AI Act’s prohibitions become fully enforceable.
The Modulos AI Governance Platform is designed to help you manage your use of AI – whether as a provider or as a deployer – and navigate the associated risks.