February 5, 2025

EU AI Act prohibited practices: the complete list

By Modulos · 19 min read

Updated April 2026. The Digital Omnibus currently in trilogue includes proposals to extend Article 5 to cover additional prohibited practices, specifically AI systems generating child sexual abuse material (CSAM) and certain non-consensual intimate deepfakes. These are proposals, not enacted law. The eight Article 5 prohibitions described below are the current binding list. We will update this post once the trilogue concludes.

Big picture: the AI Act's approach and purpose

The European Commission's guidance on the EU AI Act recognises that artificial intelligence can greatly enhance productivity and innovation, yet it must also respect people's rights and freedoms. This post explains how the AI Act identifies and bans certain practices that directly threaten fundamental rights, why they are considered unacceptable, and how these prohibitions play out in practice.

General purpose of the AI Act

The AI Act imposes rules on providers and deployers of AI systems in the EU, aiming to ensure a high level of fundamental rights protection (health, safety, privacy, non-discrimination) while fostering innovation. It is built on a risk-based approach. Certain "unacceptable" uses are banned outright (Article 5), "high-risk" uses get stringent requirements, "limited-risk" uses have transparency duties, and minimal-risk uses can remain largely unregulated (though still subject to voluntary codes of conduct).
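
To make the tiering concrete, here is a minimal, purely illustrative sketch that models the four risk tiers as an enumeration. The tier descriptions follow the Act, but the enum, mapping, and example use cases are our own simplification, not a classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (simplified)."""
    UNACCEPTABLE = "prohibited outright under Article 5"
    HIGH = "permitted, subject to stringent requirements"
    LIMITED = "permitted, subject to transparency duties"
    MINIMAL = "largely unregulated; voluntary codes of conduct"

# Illustrative, non-exhaustive examples of where common use cases tend to fall.
EXAMPLE_TIERS = {
    "social scoring leading to detrimental treatment": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```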

In the Commission's view, this layered approach balances innovation with safeguarding rights. The "unacceptable risk" category is narrow and specifically enumerated so that it addresses uses deemed fundamentally incompatible with EU values.

Why some AI practices are prohibited

Certain AI practices, by their nature, can severely interfere with fundamental rights and EU values like human dignity, equality, and democracy. The Commission's guidelines interpret and clarify these prohibitions so that all Member States enforce them uniformly. The goal is to prevent abuses that undermine personal autonomy, privacy, or equitable treatment, no matter how technologically advanced the system.

Structure of the guidelines

Within Article 5, the AI Act singles out eight distinct categories of prohibited practices. The Commission's guidance explains the rationale for each:

  • Article 5(1)(a) and (b): bans certain manipulative, deceptive, or exploitative AI uses that "materially distort" someone's behaviour and lead to significant harm.
  • Article 5(1)(c): bans social scoring that leads to unjust or disproportionate detrimental treatment.
  • Article 5(1)(d): bans criminal offence risk assessments or predictions based solely on a person's profiling or personality traits.
  • Article 5(1)(e): bans untargeted scraping of facial images from the internet or CCTV to build face recognition databases.
  • Article 5(1)(f): bans emotion recognition in workplaces and education (except for safety or medical reasons).
  • Article 5(1)(g): bans biometric categorisation that infers someone's race, religion, political opinions, sexual orientation, and similar traits.
  • Article 5(1)(h): bans real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement, with narrow exceptions (kidnapping victims, imminent threats, serious crimes).

Enforcement and timing

Prohibitions under Article 5 apply six months after the AI Act's entry into force, meaning from 2 February 2025. Detailed rules on enforcement, penalties, and market surveillance authorities apply later (from 2 August 2025), but the prohibitions themselves are directly enforceable from 2 February 2025. Member States can pass more restrictive national rules if they wish.

Even though the Act staggers enforcement deadlines, operators and developers must be ready. After February 2025, they could face legal challenges if they offer or deploy systems covered by Article 5's prohibitions, even if formal national enforcement is still gearing up.

Overview of all prohibited AI practices (Article 5)

The Act's Article 5 enumerates eight sub-paragraphs of unacceptable uses. Below is a quick overview, expanded article by article in the next section.

(a) Manipulative or deceptive techniques causing significant harm. Subliminal or purposefully manipulative or deceptive AI that materially distorts a person's behaviour in ways that result in or are likely to result in significant harm.

(b) Exploitation of vulnerable groups. AI systems that exploit vulnerabilities due to age, disability, or socio-economic situation, again leading to or risking significant harm.

(c) Social scoring. AI-based social scoring that leads to unjustified or disproportionate detrimental treatment in an unrelated context. For example, combining data from one context and using it in another to punish or exclude people.

(d) Crime risk prediction solely based on profiling. Assessing the likelihood of a natural person's future criminal offending solely using their personality traits or profiling, unless it is to support an existing investigation based on objective facts.

(e) Building face recognition databases via untargeted scraping. Scraping the internet or large CCTV footage sets in an indiscriminate ("vacuum cleaner") way to gather masses of facial images to create or expand face recognition databases.

(f) Emotion recognition at work or in education. Inferring or identifying emotions of people at workplaces or schools, with an exception if used strictly for medical or safety reasons.

(g) Biometric categorisation of sensitive traits. Categorising people individually by analysing their biometric data to infer sensitive traits such as race, political opinions, religion, sexual orientation, and similar. Exception: labelling lawfully acquired biometric data sets for legitimate safety or data-quality reasons.

(h) Real-time remote biometric identification for law enforcement. Real-time RBI in publicly accessible spaces is banned for police or other law enforcement unless it is strictly necessary (kidnapping victims, imminent serious threat, or certain serious crimes listed in Annex II), with prior authorisation from a judge or independent body.

Article-by-article breakdown

Articles 5(1)(a) and 5(1)(b): harmful manipulation and exploitation

Reasoning: Protects autonomy, dignity, and free will.

(a) Bans AI that covertly manipulates or deceives people (subliminal cues, deceptive chatbots, manipulative recommendation loops) and is likely to cause significant harm (mental distress, self-harm, or large financial losses).

Key conditions: (1) the technique is subliminal, purposefully manipulative, or deceptive; (2) it materially distorts the person's behaviour by impairing their ability to make an informed decision; (3) it results in or is likely to result in significant harm.

(b) Focuses on especially vulnerable groups: children, disabled people, or those in a precarious socio-economic context. It bars exploitative AI that distorts their behaviour and leads to serious harms.

In scope vs out of scope: Out-of-scope examples include benign or moderate nudges, lawful persuasion, or manipulative techniques that do not lead to serious harm. A therapy chatbot that subliminally encourages healthy habits might be fine if it is beneficial and not likely to result in significant harm. On the other hand, the Commission highlights that if the covert influence is likely to harm users significantly (especially the vulnerable), it moves into clearly banned territory.
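
Note that the three conditions listed above for point (a) are cumulative: all of them must hold before the practice falls under the ban. The snippet below is a minimal illustration of that logic; the function and flag names are hypothetical and do not come from the Act or the guidelines.

```python
def article_5_1_a_applies(uses_prohibited_technique: bool,
                          materially_distorts_behaviour: bool,
                          significant_harm_likely: bool) -> bool:
    """Illustrative only: Article 5(1)(a) requires ALL three elements together.

    uses_prohibited_technique: subliminal, purposefully manipulative, or deceptive
    materially_distorts_behaviour: impairs the person's ability to make an informed decision
    significant_harm_likely: causes or is reasonably likely to cause significant harm
    """
    return (uses_prohibited_technique
            and materially_distorts_behaviour
            and significant_harm_likely)

# A transparent, harmless nudge fails the third condition and does not trigger the ban.
print(article_5_1_a_applies(True, True, False))  # False
```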

Article 5(1)(c): social scoring

Reasoning: Prevents mass "social credit" style systems that rank or penalise individuals beyond permissible contexts. The EU is especially wary of "general-purpose" data used across different contexts, leading to widespread surveillance or social pressure.

Practice banned: Evaluating natural persons or groups over time based on social behaviour or personal characteristics and using that score to penalise them in contexts unrelated to the data's origin, or in ways that are disproportionate.

Examples

  • Banned: A municipality collecting data on residents' library returns, volunteering, or social media behaviour and tying that to loan approvals or job opportunities.
  • Allowed: Genuine credit-scoring for financial risk using only relevant financial data, subject to consumer and data-protection law (no mixing in of irrelevant data from social media).

The Commission underscores that "social scoring" often hints at broader social control, so the ban is meant to stop such practices before they take hold.

Article 5(1)(d): crime risk assessment based solely on profiling

Reasoning: People must be judged on actual behaviour, not solely on algorithmic predictions about future offending using personality traits.

Prohibition: Bans placing on the market or using an AI system that relies solely on personality traits and profiling to predict whether a person might commit a crime. If combined with objective evidence from an existing criminal investigation, it is not "solely" profiling and is not automatically banned. It becomes a high-risk system instead.

Out of scope: Place-based or location-based predictive policing (predicting high-crime areas) is not automatically banned, though still high-risk, as it is not about singling out an individual's predicted criminality.

In practice, the ban reflects the EU's emphasis that AI tools should not replace due legal process, nor label individuals "likely to offend" absent any specific, verifiable evidence.

Article 5(1)(e): untargeted scraping of facial images

Reasoning: Protects privacy and prevents creation of massive, indiscriminate face databases from social media or CCTV.

Ban: Building or expanding face recognition databases via indiscriminate scraping (no regard for user consent or relevance). "Untargeted" means vacuuming up large volumes of images or footage and turning them into a database.

Out of scope:

  • Targeted scraping (for example, searching for a missing specific person) is not untargeted.
  • Datasets that do not serve identification (for example, training a generative model on synthetic faces) may not be covered by this ban, though other rules like GDPR can still apply.

The Commission points out that what matters is whether the system is "reasonably likely" to transform into face recognition capabilities, especially if personal data is grabbed from countless unsuspecting users.

Article 5(1)(f): emotion recognition at work and school

Reasoning: AI-based emotion detection is considered scientifically dubious and intrusive, and it fosters an imbalance of power in workplaces and schools.

Ban: No placing or using AI to infer emotions in workplaces or educational institutions, except for strictly medical or safety uses. If used to check pilot fatigue or to help a special-needs child interpret emotional cues in therapy, that might be allowed if it is truly for safety or medical reasons.

Out of scope: Commercial uses of emotion recognition (for example in marketing) are not prohibited, but still classified as high-risk.

The Commission notes that emotion recognition can be misleading, encourage constant surveillance, or create undue pressure on employees and students who fear being monitored for intangible mood changes.

Article 5(1)(g): biometric categorisation by sensitive traits

Reasoning: A system that scans your face or gait and infers religion or sexual orientation can easily lead to severe discrimination or abuse.

Ban: Biometric categorisation that deduces or infers race, political opinions, sexual orientation, and similar traits on an individual basis.

An exception exists for mere labelling or filtering of lawfully acquired biometric data sets (for example, to ensure balanced demographic coverage in training).

In the EU's view, such categorisation crosses ethical lines and can quickly become a tool of profiling or harassment, hence the outright ban.

Article 5(1)(h): real-time remote biometric identification (RBI) for law enforcement

Reasoning: Real-time, mass face recognition in public spaces is seen as a major threat to privacy, autonomy, and freedom of assembly.

Prohibition: Using real-time RBI in publicly accessible spaces for law enforcement is banned outright, with only three narrowly defined exceptions:

  1. Targeted search for specific victims (kidnapping, human trafficking, and similar) or missing persons.
  2. Prevention of imminent life-threatening dangers, for example a credible terrorist attack or an active shooter.
  3. Identification or localisation of suspects of certain serious crimes (listed in Annex II) punishable by a maximum custodial sentence of at least four years.

Each individual use must receive prior authorisation by a judicial or independent authority, must be "strictly necessary", and must be limited in time, place, and scope. Mass or arbitrary real-time face matching is banned.
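
To show how these cumulative safeguards stack up, here is an illustrative pre-deployment gate for the Article 5(1)(h) exceptions. The field names and structure are our own, not an official checklist, and passing such a check is no substitute for the legal authorisation process itself.

```python
from dataclasses import dataclass

@dataclass
class RbiDeploymentCheck:
    """Illustrative gate mirroring the cumulative Article 5(1)(h) conditions."""
    exception_ground: str            # "victim_search", "imminent_threat", or "annex_ii_crime"
    national_law_permits_use: bool   # the Member State has enacted enabling legislation
    prior_authorisation: bool        # a judge or independent authority approved this specific use
    strictly_necessary: bool         # no less intrusive means would suffice
    limited_in_time_place_scope: bool
    fria_completed: bool             # Fundamental Rights Impact Assessment carried out (see below)

    def may_proceed(self) -> bool:
        allowed_grounds = {"victim_search", "imminent_threat", "annex_ii_crime"}
        return (self.exception_ground in allowed_grounds
                and self.national_law_permits_use
                and self.prior_authorisation
                and self.strictly_necessary
                and self.limited_in_time_place_scope
                and self.fria_completed)

check = RbiDeploymentCheck("victim_search", True, True, True, True, True)
print(check.may_proceed())  # True only when every single condition is satisfied
```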

Implementation, enforcement, and national laws

As the AI Act's bans come into force, providers and deployers must also understand how the Commission, along with national authorities, will ensure compliance.

Market surveillance authorities

  • National authorities monitor compliance with the AI Act. They can act on their own or on complaints.
  • Prohibitions in Article 5 carry the highest penalty tier: fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.
  • Between 2 February 2025 (when the bans start) and 2 August 2025 (when the enforcement provisions begin), the prohibitions are still in effect. Operators can be taken to court even if the new market surveillance authority system is not yet fully in place.

Interaction with other EU laws

  • The AI Act is designed to complement data protection (GDPR, LED) and consumer protection (UCPD, Digital Services Act, and similar). The prohibited practices are narrower in scope but more absolute.
  • Data protection rules still apply in full. Many of these banned practices would also breach data protection law or anti-discrimination law.

This means if an AI practice is not outright prohibited, it might still be disallowed or heavily restricted under data protection, anti-discrimination, or sector-specific rules.

Member State laws

  • For the real-time RBI exception (Article 5(1)(h)), each Member State that wishes to allow any of these exceptions must adopt or update its own laws. The laws must detail exactly how law enforcement can request authorisation, what serious crimes are covered, and related matters.
  • Member States may choose to disallow these exceptions altogether, imposing stricter bans on RBI.

This creates room for variations across the EU, but the Commission guidelines ensure a common minimum standard.

Fundamental Rights Impact Assessment (FRIA)

  • For any allowed exception to Article 5(1)(h), the deploying law enforcement authority must carry out a FRIA (Fundamental Rights Impact Assessment). This is separate from, though complementary to, standard Data Protection Impact Assessments (DPIA).
  • A FRIA systematically reviews how severely fundamental rights are impacted and what mitigations are in place, particularly for real-time biometric identification in public spaces.

The FRIA process draws explicit attention to the consequences for privacy, potential discrimination, and freedom of assembly before any real-time RBI use can begin.

Timing

  • Article 5 bans took effect on 2 February 2025.
  • Organisations had six months from the Act's entry into force (1 August 2024) to stop or retool any system that violates Article 5.
  • The official market surveillance and enforcement structures apply from 2 August 2025.
  • Providers and deployers should ensure compliance regardless. Courts can enforce the prohibitions, and individuals can claim violations, even before the official market surveillance mechanism is fully in place.

Key takeaways and next steps

Below are some core lessons the Commission highlights for all AI stakeholders, whether providers, deployers, or end users.

Prohibited means banned at market and use levels

If a practice is in Article 5, it cannot be placed on the EU market, put into service, or used. Providers cannot offer it, and deployers cannot adopt or run it. Some exceptions exist only for real-time RBI (Article 5(1)(h)) under extremely narrow conditions.

High-risk vs prohibited

Many AI applications are high-risk but still permissible if they follow the Act's requirements. By contrast, the practices on Article 5's list are disallowed outright (or allowed only in narrowly defined scenarios). The Commission emphasises that meeting high-risk obligations is not enough if the system is actually part of a banned practice.

Practical advice

Vendors (providers) must do careful due diligence to ensure none of their systems can reasonably be foreseen to be used in a banned manner.

Deployers must ensure actual usage does not contravene Article 5. Even if a contract says "do not use for that purpose", deployers remain liable for how they deploy the system.
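
One practical way to operationalise this due diligence is a screening pass over the eight prohibitions for every system in an inventory. The sketch below is a minimal illustration: the question wording is ours, it is not legal advice, and a flagged answer simply means the system warrants proper legal review.

```python
# Illustrative Article 5 screening questions; the wording is ours, not the Act's.
ARTICLE_5_SCREENING = {
    "5(1)(a)": "Uses subliminal, manipulative, or deceptive techniques likely to cause significant harm?",
    "5(1)(b)": "Exploits vulnerabilities linked to age, disability, or socio-economic situation?",
    "5(1)(c)": "Scores people and penalises them in unrelated contexts or disproportionately?",
    "5(1)(d)": "Predicts criminal offending solely from profiling or personality traits?",
    "5(1)(e)": "Scrapes facial images indiscriminately to build or expand a recognition database?",
    "5(1)(f)": "Infers emotions at work or in education outside medical or safety uses?",
    "5(1)(g)": "Categorises individuals' sensitive traits from biometric data?",
    "5(1)(h)": "Performs real-time remote biometric identification in public for law enforcement?",
}

def screen(system_name: str, answers: dict[str, bool]) -> list[str]:
    """Return the Article 5 provisions flagged for a given system (illustrative)."""
    flagged = [ref for ref, hit in answers.items() if hit]
    if flagged:
        print(f"{system_name}: escalate for legal review, provisions {flagged}")
    return flagged

# Example: a system with no flagged answers.
screen("demo-system", {ref: False for ref in ARTICLE_5_SCREENING})
```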

Looking ahead

  • The Commission will regularly update these guidelines if technology or case law changes.
  • Member States must pass or amend laws if they want to allow the narrow real-time RBI exceptions. They must also set up or designate authorities to grant or refuse authorisations for each use.
  • The Digital Omnibus currently in trilogue may add new prohibited practices (notably AI-generated CSAM and certain non-consensual intimate deepfakes). The final scope depends on the outcome of negotiations.

For companies, it is crucial to monitor how national laws and guidance evolve. In some countries, biometrics or high-risk uses might be curtailed even further. In others, certain exceptions for law enforcement might become possible, but always under tight controls.

Use case examples: borderline scenarios

These examples illustrate how the prohibitions play out in practice, especially when an AI system sits on the borderline of what Article 5 disallows. They come from real-life contexts where developers or deployers must stop and ask: are we nearing a prohibited practice?

1. Harmful manipulation and deception

Subliminal fitness coach. A lifestyle-coaching AI app nudges users to buy premium workout gear and supplements. It flashes pop-ups or quick messaging that some might consider subliminal.

Why it is borderline: it may materially distort decisions if the user is unaware of the subtle influence. Harm might not be obviously severe, but it could be if vulnerable users overspend or make unhealthy choices under pressure.

Extra caution: providers should review whether the tactics risk significant psychological or financial harm. Deployers should confirm the nudge methods are transparent and fully within users' conscious awareness.

2. Exploitation of vulnerabilities

Seniors' language chatbot. A freemium language tutor chatbot pushes older adults toward high-priced subscription plans. Continual "personalised offers" exploit social isolation or digital inexperience.

Why it is borderline: the AI could be leveraging age-related or socio-economic vulnerabilities. It is unclear whether the financial harm is significant or merely annoying upselling.

Extra caution: if older users end up in debt or psychologically distressed, the system risks violating the ban. Developers should ensure marketing and persuasion are ethically balanced and non-exploitative.

3. Social scoring

Community engagement points. A municipality's app assigns "civic rating" points for volunteer hours or social behaviour. These scores might influence priority for certain services, for example event tickets or small grants.

Why it is borderline: potentially uses data from multiple contexts (social media or local groups) to create a general social score. Could lead to disproportionate or unfair treatment if the city denies some benefits based on low civic rating.

Extra caution: must verify that the final usage is not unrelated to the data's source. If the points meaningfully penalise people out of proportion, it veers into prohibited social scoring.

4. Crime risk prediction solely based on profiling

Retail theft risk tool. A private security AI tries to flag potential shoplifters based on impulsivity or suspected stress level. Relies mostly on personal traits and estimated socio-economic factors.

Why it is borderline: if the AI uses only personality traits or profiling, without objective crime links, this may be "solely based on profiling". Could step into law enforcement tasks if the retailer cooperates with police.

Extra caution: providers should check if actual facts (past theft history) factor in, not just personality inferences. If no real evidence is used, the tool risks crossing Article 5(1)(d).

5. Untargeted scraping of facial images

General "face data" training set. A startup scrapes thousands of random profile photos from public posts to build an AI detection model. This database could be repurposed for face recognition in the future.

Why it is borderline: untargeted harvesting of images might be seen as building or expanding a recognition database. The startup claims it is for face detection, not face identification, but the line can blur.

Extra caution: if the scraping can foreseeably become a face recognition product, it may violate the ban. Ensure technical and contractual measures limit usage to detection-only (with user consent, if relevant).

6. Emotion recognition at work or school

Workplace stress monitor. An AI system for shift scheduling that interprets workers' micro-expressions and vocal tone to detect negativity. The employer uses the results in performance assessments.

Why it is borderline: this might go beyond mere fatigue detection and effectively do emotion recognition, which is banned in workplaces unless for strict safety or medical ends. The employer's reasons are partly productivity or HR decisions, not pure safety.

Extra caution: if it is indeed inferring anger, frustration, or sadness, the system likely violates Article 5(1)(f). The bar for safety reasons is narrow. This borderline scenario calls for a thorough legal check.

7. Biometric categorisation of sensitive traits

Mall demographic camera. A camera system labels passers-by by age bracket, hair colour, approximate ethnicity, or religious clothing for marketing metrics. The data is stored at the individual level and can identify repeated visits or patterns.

Why it is borderline: inferring religion or ethnicity from clothing or facial structure enters sensitive trait territory. The borderline question is whether it is purely labelling data sets or actually categorising real individuals by race or religion.

Extra caution: if the system lumps individuals by any protected characteristic, it is likely illegal. Distinguish legitimate marketing segmentation (age range) from suspect sensitive trait categorisation.

8. Real-time remote biometric identification (RBI) in public

Citywide watchlist cameras. Police in a busy downtown area run real-time face scans on passers-by to see if they match any known watchlist. The watchlist includes everything from petty theft suspects to missing people.

Why it is borderline: Article 5(1)(h) exceptions are strictly for specific serious crimes, imminent threats, or finding specific missing persons. Using it as a broad indefinite search (covering lesser offences) is likely prohibited.

Extra caution: the police or vendor must check if the watchlist focuses on allowed exceptions (terrorism, kidnapping, and similar). Sweeping usage is disallowed. Any broader approach triggers the ban.

Final thoughts

These scenarios show how an AI application can drift close to an outright ban if it crosses certain lines, whether exploiting vulnerable users, scraping facial images en masse, or imposing hidden manipulations. The European Commission's guidelines continually emphasise proportionality (is the measure out of scale or used in the wrong context?), necessity (could less invasive alternatives work?), and respect for fundamental rights (dignity, privacy, data protection, and similar).

For AI developers, risk managers, and compliance teams, these distinctions are vital. Even if your application does not immediately appear unacceptable, it can enter a grey zone if you expand functionalities (for example, turning face detection into full face recognition) or apply an AI tool in an unintended domain. Keeping track of these lines, and ensuring your product or service stays clear of them, is essential as the EU AI Act's prohibitions become fully enforceable and as the Digital Omnibus potentially expands the list.

The Modulos AI governance platform is designed to help you manage your use of AI, whether as a provider or as a deployer, and navigate the associated risks.