EU AI Act risk categories explained

The EU AI Act risk categories problem
Search for EU AI Act risk categories and you will find the same pyramid everywhere: unacceptable, high, limited, minimal. Four tiers. Neat hierarchy. One system, one category.
That model is wrong, and not just oversimplified. The pyramid is structurally misleading in ways that cause real compliance failures.
The EU AI Act does not sort AI systems into mutually exclusive risk tiers. It runs four independent compliance checks, and the obligations stack. A single AI system can trigger multiple checks at once. Getting this distinction right is the difference between actual compliance and checking the wrong boxes.
The pyramid problem
The four-tier risk pyramid appears in Commission communications, consulting decks, and nearly every explainer article. Look at the actual legislative text and the term "limited risk" does not appear as a risk classification category. Article 50's transparency requirements function as a parallel track that applies across risk levels, not as a separate tier.
A credit-scoring chatbot is both high-risk (essential services under Annex III) and subject to transparency obligations (human interaction under Article 50). The obligations stack. The pyramid model would have you pick one.
This matters because compliance planning based on the pyramid will miss obligations. If you think transparency requirements only apply to "limited risk" systems, you will overlook the disclosure requirements that also attach to your high-risk systems.
How compliance actually works: four gates
Think of the EU AI Act risk categories as four independent gates rather than tiers.
Gate 1: Prohibited practices (Article 5)
Question: Does this AI practice cross a fundamental rights red line?
Consequence: Banned. Full stop.
Eight categories of AI practices are prohibited entirely:
- Social scoring leading to unjustified or disproportionate detrimental treatment (the final text covers both public and private actors)
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
- Emotion recognition in workplaces and educational institutions
- Biometric categorisation inferring sensitive characteristics (race, political opinions, sexual orientation, religious beliefs)
- Untargeted scraping for facial recognition databases
- AI exploiting vulnerabilities of specific groups (age, disability, social or economic situation)
- AI designed to manipulate behaviour causing significant harm
- AI assessing the risk of criminal offending based solely on profiling
If your system falls here, no compliance pathway exists. Redesign or discontinue.
Gate 2: High-risk systems (Article 6 plus Annexes I and III)
Question: Is this AI used in a high-stakes domain or as a safety component?
Consequence: Full compliance regime. Conformity assessment, technical documentation, risk management, human oversight, EU database registration, and post-market monitoring.
Two pathways trigger high-risk classification:
Pathway A (Annex I): AI serving as a safety component of products covered by EU harmonisation legislation requiring third-party conformity assessment. Medical devices, machinery, toys, lifts, radio equipment, vehicles, aircraft. These integrate with existing sectoral product safety frameworks.
Pathway B (Annex III): Standalone AI systems in eight high-stakes domains:
- Biometrics (remote identification, categorisation, emotion recognition)
- Critical infrastructure (road traffic, utilities, digital infrastructure)
- Education (admissions, assessment, proctoring)
- Employment (recruitment, performance evaluation, task allocation)
- Essential services (credit scoring, insurance risk assessment, emergency dispatch)
- Law enforcement (evidence evaluation, recidivism prediction, profiling)
- Migration and border control (risk assessment, application examination)
- Administration of justice (legal research assistance, voter influence)
Annex III systems can claim exemption under Article 6(3) if they do not materially influence decision outcomes. That covers narrow procedural tasks, improvements to prior human work, pattern detection without replacement, and preparatory tasks only. Any system performing profiling is always high-risk regardless of exemptions.
Gate 3: Transparency requirements (Article 50)
Question: Does this AI interact with people, detect emotions, or generate synthetic content?
Consequence: Disclosure and labelling obligations.
Transparency requirements under Article 50 operate as a parallel track rather than a risk tier. They apply regardless of whether Gate 2 triggered:
- AI systems interacting directly with humans must disclose that they are AI (unless obvious from context)
- Emotion recognition and biometric categorisation systems must inform subjects
- Synthetic audio, image, video, or text must be machine-readable as AI-generated
- Deepfakes must be disclosed (with exceptions for creative and satirical work)
A high-risk HR screening system that uses a chatbot interface triggers both Gate 2 (high-risk) and Gate 3 (transparency). A simple customer service chatbot might only trigger Gate 3. The gates are independent.
Gate 4: General-purpose AI (Chapter V)
Question: Are you providing a foundation model or general-purpose AI system?
Consequence: Model-level obligations including documentation and copyright compliance. Systemic risk models face additional requirements for evaluation, incident reporting, and cybersecurity.
GPAI obligations attach to the model provider, not the downstream deployer. If you deploy GPT-4 in a high-risk application, OpenAI has GPAI obligations and you have high-risk deployer obligations. The two tracks run in parallel.
GPAI models are presumed to present systemic risk when cumulative training compute exceeds 10²⁵ FLOPs. These models face additional requirements: adversarial testing, serious incident tracking, and adequate cybersecurity.
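To get a feel for where the 10²⁵ FLOP threshold sits, a rough back-of-the-envelope check can help. The sketch below uses the common ≈6 × parameters × tokens approximation for dense-transformer training compute; that heuristic comes from the scaling-law literature, not from the Act itself, and the model sizes are purely illustrative.

```python
# Rough heuristic only: the 6 * params * tokens estimate is a common
# approximation for dense-transformer training compute. Nothing here is
# prescribed by the EU AI Act; the Act states only the 10^25 FLOP threshold.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold for the systemic-risk presumption


def estimated_training_flops(n_params: float, n_training_tokens: float) -> float:
    """Approximate training compute as ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_training_tokens


def presumed_systemic_risk(n_params: float, n_training_tokens: float) -> bool:
    """True if the estimate exceeds the Act's presumption threshold."""
    return estimated_training_flops(n_params, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below the 1e25 presumption threshold.
below = presumed_systemic_risk(70e9, 15e12)   # False
```

The takeaway: only the very largest frontier training runs cross the presumption line today, but the threshold is a rebuttable presumption the Commission can adjust, so treat any estimate near it as a trigger for closer analysis.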
How gates stack: worked examples
This is where the pyramid model breaks down completely.
Credit scoring chatbot
- Gate 1: Not prohibited
- Gate 2: High-risk (creditworthiness assessment under Annex III)
- Gate 3: Transparency required (human interaction)
- Gate 4: Depends on underlying model
Obligations from Gates 2 and 3 both apply. The pyramid would force you to pick "high risk" or "limited risk" and miss that transparency obligations also attach.
Customer service bot
- Gate 1: Not prohibited
- Gate 2: Not high-risk
- Gate 3: Transparency required (human interaction)
- Gate 4: Depends on underlying model
Only Gate 3 triggers. The pyramid misleadingly calls this "limited risk", but transparency is a disclosure requirement, not a risk classification.
Medical triage LLM
- Gate 1: Not prohibited
- Gate 2: High-risk (emergency dispatch under Annex III)
- Gate 3: Transparency required (human interaction, possibly synthetic content)
- Gate 4: GPAI obligations apply to model provider
Three gates trigger at once. The deployer holds high-risk and transparency obligations; the model provider holds GPAI obligations. The pyramid cannot represent this structure.
Spam filter
- Gate 1: Not prohibited
- Gate 2: Not high-risk
- Gate 3: No direct human interaction requiring disclosure
- Gate 4: Not GPAI
No gates trigger. This is genuinely minimal-risk, but the classification follows from not triggering any of the independent compliance checks, not from placement in a designated tier.
Annex III high-risk domains: the deep dive
Gate 2 deserves detailed treatment because most enterprise compliance work concentrates here. Annex III defines eight domains where AI systems are presumptively high-risk. Reading the actual regulatory language together with the recitals helps distinguish genuinely covered systems from those that merely seem adjacent.
1. Biometrics (Recitals 54, 159)
Three sub-categories trigger high-risk classification:
(a) Remote biometric identification systems, meaning 1:n matching against databases of enrolled individuals. The Act explicitly excludes verification-only systems (1:1 matching to confirm a claimed identity) from this category.
(b) Biometric categorisation that infers sensitive or protected attributes. The key word is "infer". Systems that deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation from biometric data fall here.
(c) Emotion recognition systems in contexts where they are not banned outright. Workplace and education deployments fall under the Gate 1 prohibitions; other contexts, such as customer-facing uses, are high-risk here.
Practical distinction: A facial recognition system confirming you are who your badge says you are (verification) is not high-risk under this domain. A system scanning a crowd to identify individuals against a watchlist (identification) is.
2. Critical infrastructure (Recital 55)
Covers AI systems used as safety components in managing critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity. Recital 55 explains the rationale: failure or malfunctioning in these contexts may risk life or health at large scale, or cause major disruption to social and economic activities.
What counts as a "safety component": Systems that protect physical integrity of infrastructure or users, even if not strictly necessary for the infrastructure to function. The recital gives concrete examples: AI monitoring water pressure in distribution systems, or AI controlling fire alarms in cloud computing data centres.
What does not: Operational optimisation systems that improve efficiency but do not serve a safety-critical function.
3. Education and vocational training (Recital 56)
Four sub-categories:
(a) Systems determining access to, admission to, or assignment to educational institutions at all levels
(b) Systems evaluating learning outcomes, including those steering the learning process itself
(c) Systems assessing the appropriate level of education an individual will receive or be able to access
(d) Systems monitoring or detecting prohibited student behaviour during examinations
Recital 56 makes the stakes explicit: these systems determine the educational and professional course of a person's life, affecting their ability to secure a livelihood. The recital specifically warns about perpetuating historical patterns of discrimination against women, certain age groups, persons with disabilities, and persons of particular racial or ethnic origins or sexual orientation.
Coverage is broad: Adaptive learning platforms that steer curriculum paths, automated essay grading that affects progression, AI proctoring that flags "suspicious" behaviour. All fall within scope if they materially influence outcomes.
4. Employment, worker management, and access to self-employment (Recital 57)
Two sub-categories with extensive reach:
(a) Recruitment and selection: targeted job advertisements, CV and application filtering, and candidate evaluation in interviews or tests
(b) Workplace decisions: systems affecting terms of work-related relationships, promotion and termination decisions, task allocation based on individual behaviour or personal traits, and performance or behaviour monitoring
Recital 57 echoes the discrimination concerns from education (historical patterns disadvantaging women, certain age groups, persons with disabilities, or persons of particular racial or ethnic origins or sexual orientation) and adds a distinct concern: undermining fundamental rights to data protection and privacy through workplace surveillance.
Coverage includes: Automated résumé screening, AI interview analysis, productivity monitoring software that influences performance reviews, algorithmic task assignment in gig work platforms.
5. Access to essential private services and public services and benefits (Recital 58)
Four sub-categories covering situations where individuals are often in vulnerable positions:
(a) Eligibility evaluation for public assistance benefits and services, including healthcare, and systems used to grant, reduce, revoke, or reclaim such benefits
(b) Creditworthiness evaluation and credit scoring, with an explicit carve-out for fraud detection
(c) Risk assessment and pricing for life and health insurance
(d) Evaluation and classification of emergency calls, including dispatch and priority-setting for emergency first response services (police, firefighters, medical aid) and emergency healthcare patient triage
Recital 58 explains: these systems can directly impact individuals' livelihood and may infringe rights to social protection, non-discrimination, human dignity, and effective remedy. For essential services, they determine access to housing, electricity, telecommunications, and other necessities. For emergency services, they are genuinely critical for life, health, and property.
The credit scoring carve-out matters: Fraud detection systems are not high-risk under this domain, but systems evaluating whether to extend credit are.
6. Law enforcement (Recital 59)
Covers AI systems used by or on behalf of law enforcement authorities, with five sub-categories:
(a) Assessment of the risk of a natural person becoming a victim of criminal offences
(b) Deception detection: polygraph-adjacent systems and similar lie-detector technologies
(c) Evaluation of reliability of evidence in criminal investigations or prosecutions
(d) Assessment of the risk of a natural person offending or reoffending, where not based solely on profiling, or assessment of personality traits and characteristics or past criminal behaviour
(e) Profiling during detection, investigation, or prosecution of criminal offences
The concerns here focus on accuracy, non-discrimination, and due process rights given law enforcement's coercive power.
7. Migration, asylum, and border control management (Recital 60)
Four sub-categories for systems used by competent public authorities or on their behalf:
(a) Deception detection in the migration context: polygraph-adjacent systems used during visa applications, asylum interviews, or border examinations
(b) Risk assessment of individuals, covering security risks, health risks, and the risk of irregular migration
(c) Examination of applications for asylum, visa, or residence permits, and associated complaints
(d) Identification of natural persons in migration contexts, with an explicit carve-out for travel document verification
Recital 60 notes that persons in migration situations are in particularly vulnerable positions, and these systems may affect their fundamental rights regarding asylum, free movement, and non-refoulement.
8. Administration of justice and democratic processes (Recitals 61 to 62)
Two sub-categories with distinct rationales:
(a) Systems intended to assist judicial authorities in researching and interpreting facts and law and applying the law to concrete facts. Call it AI legal research on steroids if it influences judicial reasoning.
(b) Systems intended to influence the outcome of an election or referendum, or the voting behaviour of natural persons exercising their vote
Important exclusions for category (b): systems whose output natural persons are not directly exposed to. Campaign logistics tools, accessibility features, and similar support functions that do not directly engage voters with persuasive content remain out of scope.
Recital 62 specifically calls out the threat to democratic processes and fundamental rights of free expression, assembly, and non-discrimination when AI systems directly target voter behaviour.
The exemption mechanism (Article 6(3))
Annex III systems can escape high-risk classification if they do not materially influence decision outcomes. Four conditions can establish this (any one suffices):
Narrow procedural task: Data conversion, document classification, and duplicate detection are examples of routine functions with minimal decision impact.
Improving prior human work: Enhancing already-completed human output through language improvement, tone adjustment, or formatting.
Pattern detection without replacement: Flagging anomalies or deviations for human review without replacing or influencing the original assessment.
Preparatory task: File indexing, translation, searching, and data linking that have no direct impact on substantive decisions.
Critical override: Systems performing profiling (automated processing evaluating personal aspects such as work performance, economic situation, health, preferences, behaviour, or location) are always high-risk regardless of exemptions.
Claiming exemption requires documentation before market placement, registration in the EU database, and readiness to provide documentation to authorities on request.
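The exemption logic above reduces to a short decision rule: profiling overrides everything, and otherwise any one of the four conditions suffices. A minimal sketch, with illustrative (non-statutory) field names:

```python
# Sketch of the Article 6(3) exemption logic for Annex III systems, as
# described above. Field names are illustrative labels, not statutory terms.

from dataclasses import dataclass


@dataclass
class Annex3Assessment:
    narrow_procedural_task: bool
    improves_prior_human_work: bool
    pattern_detection_without_replacement: bool
    preparatory_task_only: bool
    performs_profiling: bool


def is_high_risk(a: Annex3Assessment) -> bool:
    # Profiling overrides every exemption condition.
    if a.performs_profiling:
        return True
    # Any one of the four conditions suffices to claim the exemption.
    exempt = (a.narrow_procedural_task
              or a.improves_prior_human_work
              or a.pattern_detection_without_replacement
              or a.preparatory_task_only)
    return not exempt
```

For instance, a document-classification tool doing purely preparatory work with no profiling could claim the exemption; the same tool evaluating applicants' personal traits could not, because the profiling override applies regardless of the four conditions.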
Practical compliance framework
For each AI system in your portfolio:
Check 1: Does Gate 1 prohibit it? Review Article 5 prohibited practices. If yes, discontinue or fundamentally redesign.
Check 2: Does Gate 2 classify it as high-risk?
- Is it a safety component in Annex I products requiring third-party conformity assessment?
- Does its intended use match an Annex III category?
- If yes to either, presumptively high-risk
- If Annex III: does Article 6(3) exemption apply and is no profiling involved?
- Document the assessment either way
Check 3: Does Gate 3 require transparency?
- Does it interact directly with humans?
- Does it detect emotions or categorise biometrically?
- Does it generate synthetic content?
- If yes to any, transparency obligations apply regardless of Gate 2 outcome
Check 4: Does Gate 4 apply GPAI obligations?
- Are you the provider of a foundation model or GPAI system?
- If yes, GPAI documentation and transparency requirements
- If systemic risk (>10²⁵ FLOPs), additional evaluation and incident reporting
Compile total obligations: Sum all triggered gates. A single system may require high-risk conformity assessment, transparency disclosures, and, if you are the model provider, GPAI documentation.
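The four checks above can be sketched as a single function that accumulates obligations instead of picking one tier, which is the core of the gates model. Field and obligation names below are illustrative, not terms from the Act:

```python
# Minimal sketch of the four-gate assessment: obligations accumulate across
# independent checks. All names are illustrative, not statutory.

from dataclasses import dataclass


@dataclass
class AISystem:
    prohibited_practice: bool = False        # Gate 1: Article 5 match
    annex1_safety_component: bool = False    # Gate 2, pathway A
    annex3_match: bool = False               # Gate 2, pathway B
    article_6_3_exempt: bool = False
    performs_profiling: bool = False
    interacts_with_humans: bool = False      # Gate 3 triggers
    emotion_or_biometric_cat: bool = False
    generates_synthetic_content: bool = False
    is_gpai_provider: bool = False           # Gate 4
    training_flops: float = 0.0


def obligations(s: AISystem) -> set[str]:
    if s.prohibited_practice:
        return {"PROHIBITED"}  # no compliance pathway: redesign or discontinue
    obs: set[str] = set()
    # Profiling defeats the Article 6(3) exemption.
    annex3_high_risk = s.annex3_match and (s.performs_profiling or not s.article_6_3_exempt)
    if s.annex1_safety_component or annex3_high_risk:
        obs.add("high-risk regime")          # conformity assessment, oversight, registration
    if s.interacts_with_humans or s.emotion_or_biometric_cat or s.generates_synthetic_content:
        obs.add("transparency")              # Article 50 disclosure and labelling
    if s.is_gpai_provider:
        obs.add("GPAI")                      # Chapter V model-level duties
        if s.training_flops > 1e25:
            obs.add("GPAI systemic risk")
    return obs


# Credit-scoring chatbot from the worked examples: Gates 2 and 3 both trigger.
chatbot = AISystem(annex3_match=True, interacts_with_humans=True)
assert obligations(chatbot) == {"high-risk regime", "transparency"}
```

Returning a set rather than a single label is the point: the spam filter comes back empty, the customer service bot returns only transparency, and the credit-scoring chatbot stacks two obligation sets, exactly as in the worked examples.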
Why this matters
The pyramid model causes three compliance failures:
Missing stacked obligations. Organisations classify a system as "high-risk" and forget that transparency requirements also apply when it interacts with humans.
False comfort from "limited risk". Teams think a chatbot is "only limited risk" without recognising that if the chatbot does credit pre-screening, the system is also high-risk.
Wrong mental model for GPAI. The pyramid has no place for GPAI obligations, which run on a completely separate track from use-case risk classification.
The gates model matches the actual legislative structure and reflects how the law works in practice. Compliance planning should start here.
Gates vs pyramid: what the wrong model misses
| Scenario | Pyramid model says | Gates model says | What you miss |
|---|---|---|---|
| Credit-scoring chatbot | Pick one: "high risk" or "limited risk" | High-risk (Gate 2) and transparency (Gate 3) | Transparency obligations |
| HR screening with LLM backend | "High risk" | High-risk (Gate 2) and transparency (Gate 3) and GPAI for model provider (Gate 4) | GPAI provider obligations |
| Customer service bot | "Limited risk" | Transparency only (Gate 3) | Nothing, but wrong classification rationale |
| Emotion recognition at work | "High risk" | Prohibited (Gate 1) | You cannot deploy this system at all |
| Medical device AI | "High risk" | High-risk via Annex I (Gate 2) with different timeline than Annex III | August 2027 deadline, not August 2026 |
How to classify an AI system under the EU AI Act
For organisations asking "is my AI system high-risk under Annex III?", the answer requires running through all four gates sequentially.
Step 1: Screen for prohibited practices (Gate 1). Review Article 5. Social scoring, workplace emotion recognition, and real-time biometric identification in public spaces are banned outright with narrow exceptions. If your system falls here, no compliance pathway exists.
Step 2: Check Annex III high-risk use cases (Gate 2). Does the intended use match one of the eight Annex III domains? Biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice and democratic processes. If yes, the system is presumptively high-risk unless Article 6(3) exemptions apply and no profiling is involved.
Step 3: Check Annex I safety components (Gate 2, alternate pathway). Is the AI a safety component in products covered by EU harmonisation legislation requiring third-party conformity assessment? Medical devices, machinery, toys, vehicles, aircraft. Different timeline applies (August 2027 for most).
Step 4: Assess transparency requirements (Gate 3). Does the system interact directly with humans, detect emotions, categorise biometrically, or generate synthetic content? Transparency obligations apply regardless of high-risk status.
Step 5: Determine GPAI applicability (Gate 4). Are you the provider of a foundation model or general-purpose AI system? Model-level documentation and transparency requirements apply. Systemic risk models (>10²⁵ FLOPs) face additional evaluation and incident reporting obligations.
Step 6: Compile total obligations. Sum all triggered gates. Document the assessment. Register in the EU database if claiming Annex III exemption.
Modulos helps organisations navigate EU AI Act compliance across all four gates. The Modulos AI governance platform provides systematic classification, documentation management, and obligation tracking for enterprises operating AI systems in Europe. For broader context, see the guide to AI governance.

