GPAI & General Provisions: 2 August 2026

High-Risk AI Systems: 2 December 2027 (proposed date)

Get Ready for the EU AI Act Omnibus

The EU AI Act entered into force in 2024. Now the Omnibus amendments are changing the rules. New deadlines, new prohibited practices, and simplified requirements for high-risk AI. Stay ahead of what's coming.

Timeline and Compliance Milestones

The EU AI Act entered into force on 1 August 2024. Since then, prohibited AI practices and AI literacy requirements became enforceable in February 2025, and GPAI providers had to comply with transparency obligations by August 2025.

The Digital Omnibus proposal, now in trilogue, is reshaping what comes next. High-risk AI deadlines are shifting from August 2026 to December 2027 (Annex III) and August 2028 (Annex I). The timeline below reflects both confirmed milestones and the proposed changes.

1. August 2024: The Act officially enters into force

2. February 2025: Prohibitions on unacceptable-risk AI practices and AI literacy requirements enter into force

3. August 2025: Obligations for GPAI providers, notifications to authorities, and fines go into effect

4. February 2026: Commission implementing act on post-market monitoring

5. August 2026: General application date: GPAI enforcement, codes of practice, and remaining provisions take effect

Proposed Omnibus Timeline

6. December 2027: Obligations for high-risk AI systems in biometrics, critical infrastructure, and law enforcement (Annex III)

7. August 2028: Obligations for high-risk AI systems used as safety components in regulated products (Annex I)

8. By end of 2030: Compliance deadline for AI systems in large-scale IT systems under EU law in the area of Freedom, Security and Justice

What the Omnibus Changes

The Digital Omnibus is not just a delay. It reshapes timelines, tightens some rules, and simplifies others. Here is what is moving, what is staying, and what you should do about it.

Main high-risk AI rules (Annex III)

Current law

Under the enacted AI Act, the main high-risk obligations still apply from 2 August 2026.

Likely Omnibus outcome

The likely landing zone is 2 December 2027. The Commission, Council and Parliament all support a delay, even if they differ on the mechanics.

What to do now

Keep inventory, classification, control design and evidence planning moving. Do not freeze readiness work waiting for trilogue.

Product-integrated high-risk AI (Annex I)

Current law

Under the enacted AI Act, these obligations still apply from 2 August 2027.

Likely Omnibus outcome

The likely landing zone is 2 August 2028 for AI embedded in regulated products.

What to do now

Align product, legal, engineering and quality teams early if AI is embedded in medical, machinery, radio, transport or other regulated products.

AI literacy

Current law

Since 2 February 2025, providers and deployers must ensure a sufficient level of AI literacy for staff and other persons handling AI systems on their behalf.

Likely Omnibus outcome

The rule is likely to become more proportionate and more clearly framed, but not disappear as a practical expectation.

What to do now

Train staff anyway. It reduces operational risk, improves governance maturity and remains a strong signal to customers and regulators.

Sensitive data for bias testing

Current law

Use of special-category personal data remains tightly constrained under GDPR and the current AI Act framework.

Likely Omnibus outcome

The Omnibus is likely to expand room for bias detection and mitigation with safeguards, but the exact threshold and wording remain contested.

What to do now

Prepare governance, safeguards, legal analysis and documentation before assuming this route will be available in practice.

Registration and documentation

Current law

Annex III self-assessment, technical documentation and related record-keeping obligations still flow from the enacted Act.

Likely Omnibus outcome

The direction is simplification, not disappearance. Council and Parliament both point toward a lighter, more targeted regime rather than a full deletion.

What to do now

Structure documentation now so it can be streamlined later instead of rebuilt under deadline pressure.

Prohibited practices

Current law

The Article 5 prohibitions have applied since 2 February 2025.

Likely Omnibus outcome

The Omnibus is not just a delay package. Council has proposed adding clearer restrictions around systems capable of generating non-consensual intimate content and child sexual abuse material.

What to do now

Do not treat the Omnibus as a reason to pause all AI Act work. Some obligations are already live, and some prohibitions may tighten further.

How Compliance Actually Works

The EU AI Act doesn't sort AI systems into tidy risk tiers. It runs four independent checks, and the obligations stack. A single AI system can trigger multiple gates simultaneously.

Most guides get this wrong. Here's how compliance actually works.

GATE 1 · Article 5: Prohibited Practices. Does this AI practice cross a red line?

GATE 2 · Annex III: High-Risk Systems. Is this AI used in a high-stakes domain?

GATE 3 · Article 50: Transparency. Does this AI interact with people, detect emotions, or generate synthetic media?

GATE 4 · Chapter V: General-Purpose AI. Are you providing a foundation model or GPAI?

Obligations stack: one system can trigger multiple gates.

Examples

Credit Scoring Chatbot: High-risk (essential services) + Transparency (human interaction)

Customer Service Bot: Transparency only (disclose that it is AI)

Medical Triage LLM: All three: High-risk + Transparency + GPAI obligations
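The stacking logic above can be sketched in code. This is a hypothetical illustration, not an official classification tool: the trigger flags are simplified stand-ins for the Act's actual legal tests, and the gate labels are this page's shorthand.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Simplified sketch of one AI system's gate triggers (illustrative only)."""
    name: str
    prohibited_practice: bool = False    # Gate 1: Article 5 red line
    annex_iii_domain: bool = False       # Gate 2: high-stakes domain (Annex III)
    interacts_with_people: bool = False  # Gate 3: Article 50 transparency trigger
    is_gpai: bool = False                # Gate 4: Chapter V general-purpose AI

def triggered_gates(system: AISystem) -> list[str]:
    """Run all four gates independently; obligations stack rather than exclude."""
    gates = []
    if system.prohibited_practice:
        gates.append("Gate 1: prohibited (Article 5)")
    if system.annex_iii_domain:
        gates.append("Gate 2: high-risk (Annex III)")
    if system.interacts_with_people:
        gates.append("Gate 3: transparency (Article 50)")
    if system.is_gpai:
        gates.append("Gate 4: GPAI (Chapter V)")
    return gates

# A credit scoring chatbot hits two gates at once:
bot = AISystem("credit scoring chatbot",
               annex_iii_domain=True, interacts_with_people=True)
```

The point of the sketch is the independence of the checks: a system is never "in one tier"; it accumulates whichever obligations its gates trigger.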

The EU AI Act Covers More Than You Think

You thought you had 3 AI systems. You probably have 50. The Act's definition is broad, and most of your AI is hiding below the surface.

What everyone pictures (the visible tip): Large Language Models, Image Generators, Code Assistants, Autonomous Vehicles.

Hidden below the surface: High-Risk (Gate 2), Transparency (Gate 3), and Minimal Risk systems.

The average enterprise has 10x more AI systems than it assumes. Most haven't been inventoried.

EU AI Act Definition (Article 3)

An AI system is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The EU AI Act Follows Your AI

Like GDPR, the EU AI Act is extraterritorial. It applies based on who you affect, not where you're headquartered.


The Chain of Responsibility

A real-world example of how the EU AI Act reaches across borders

Company A
Chile
Provider

Builds AI credit scoring model

Covered by EU AI Act

System placed on EU market through value chain

Company B
United States
Deployer

Licenses model for fintech platform

Covered by EU AI Act

Deploying high-risk AI affecting EU persons

EU Customers
European Union
Affected Persons

Credit decisions made about them

Protected by the EU AI Act

Compliance Requirements

The Act lays out a range of requirements for high-risk AI systems, covering:

Risk Management System (Article 9)
Data and Data Governance (Article 10)
Technical Documentation (Article 11)
Record Keeping (Article 12)
Transparency and Provision of Information to Deployers (Article 13)
Human Oversight (Article 14)
Accuracy, Robustness and Cybersecurity (Article 15)
Quality Management System (Article 17)
Fundamental Rights Impact Assessment* (Article 27)

* Required only for public sector deployers and private deployers using high-risk AI for credit scoring or life/health insurance risk assessment.

How Modulos Helps You Meet Every Requirement

The Modulos AI Governance Platform addresses each EU AI Act obligation with purpose-built tools.

Risk Management
Quantitative risk assessment with Monte Carlo simulation
Documentation & Records
AI Agents auto-generate and find evidence in your repos
Human Oversight & QMS
Built-in review workflows with full audit trail
Multi-Framework Compliance
140+ controls mapped to EU AI Act, ISO 42001, NIST AI RMF

Conformity Assessments

High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This structured process ensures your AI systems meet regulatory requirements.

Step 1 - A high-risk AI system is developed

Establish, implement, document, and maintain a risk management system to address the risks posed by a high-risk AI system.

Step 2 - The system undergoes the conformity assessment and complies with AI requirements

- Implement effective data governance, including bias mitigation, training, validation, and testing of data sets.

- Maintain up-to-date technical documentation in a clear and comprehensive manner.

Step 3 - Registration of stand-alone systems in an EU database.

- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime.

- Design systems to ensure sufficient transparency for deployers to interpret outputs and use appropriately.

Step 4 - A declaration of conformity is signed, and the AI system should bear the CE marking

- Develop systems to maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.

- Ensure proper human oversight during the period the system is in use.

CE Mark

The system can be placed on the market.

Once substantial changes happen in the AI system's lifecycle, repeat from Step 2.

System placed on market

Disclaimer: The steps outlined above are intended to provide a general overview of the conformity assessment process. They should not be considered exhaustive and are not intended as legal or technical advice.

Understanding Roles and Responsibilities

The EU AI Act outlines specific roles and responsibilities for stakeholders in the AI system lifecycle:

Providers

Role: Develop and market AI systems
Responsibilities: Maintain technical documentation, ensure compliance with the Act, and provide transparency information.

Deployers

Role: Use AI systems within their operations.
Responsibilities: Conduct impact assessments, notify authorities, and involve stakeholders in the assessment process.

Importers

Role: Market AI systems from third countries.
Responsibilities: Verify compliance, provide necessary documentation, and cooperate with authorities.

Distributors

Role: Make AI systems available on the market.
Responsibilities: Verify CE marking and conformity, take corrective actions if needed, and cooperate with authorities.

Modifying AI Systems

Significant modifications, such as altering core algorithms or retraining with new data, may reclassify you as a provider, necessitating adherence to provider obligations.

Penalties for Non-Compliance

The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company's global annual turnover or a predetermined amount, whichever is higher. The Omnibus proposal extends proportionate penalty caps beyond SMEs and start-ups to Small Mid-Cap companies (up to 750 employees or €150M turnover), with simplified documentation and quality management obligations.

Ensure your AI systems comply with the EU AI Act to avoid these penalties.

Request a Demo

Penalty Breakdown

Non-compliance with prohibitions: up to €35M or 7% of global annual turnover

Supplying incorrect, incomplete, or misleading information: up to €7.5M or 1.5% of global annual turnover

Non-compliance with other obligations: up to €15M or 3% of global annual turnover
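The "whichever is higher" rule is simple arithmetic, and working one case through makes it concrete. The caps below are the Act's published figures; the function itself is just an illustrative sketch, not legal guidance.

```python
def max_fine(fixed_cap_eur: float, turnover_share: float,
             global_turnover_eur: float) -> float:
    """Maximum fine: the fixed cap or a share of global annual
    turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Prohibition breach (€35M cap, 7% share) for a company with
# €1 billion global turnover: 7% of turnover (about €70M)
# exceeds the €35M cap, so the turnover-based figure applies.
fine = max_fine(35_000_000, 0.07, 1_000_000_000)
```

For smaller companies the relationship flips: at €100M turnover, 7% is only €7M, so the €35M fixed cap becomes the binding maximum.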

Download the EU AI Act Guide

Learn how to ensure your AI systems comply with the EU AI Act. This guide provides a clear overview of the regulation, mandatory compliance requirements, and how to prepare your AI operations for these changes.

Download the Guide
Modulos EU AI Act Guide: Foundations and Practical Insights

FAQ about the EU AI Act

The EU AI Act is the European Union's flagship law to regulate how AI systems should be designed and deployed. It aims to protect fundamental rights, ensure safety, and foster innovation while creating a harmonized legal framework across the EU.

The EU AI Act mandates that AI system providers based in the EU comply with the regulation. Moreover, the Act also applies to providers and deployers outside the EU whose AI systems are used in the EU market. This means organizations worldwide may need to comply if their AI products or services reach EU users.

The situation is similar to the global reach of the General Data Protection Regulation (GDPR). The AI Act applies to providers outside the EU when their AI systems' output is used in the EU. Non-EU deployers using AI systems in the EU are also covered. This extraterritorial scope means companies worldwide must assess their AI offerings for EU compliance.

The EU AI Act entered into force on 1 August 2024. Prohibitions on unacceptable risk took effect in February 2025, and GPAI obligations in August 2025. The general application date is August 2026. The Digital Omnibus proposal, currently in trilogue, would push high-risk system deadlines to December 2027 (Annex III standalone systems) and August 2028 (Annex I embedded products). These dates are proposed and not yet final.

To be ready for the EU AI Act, companies will have to adhere to the extensive requirements stipulated in the regulation. Key steps include: conducting an AI systems inventory, classifying systems by risk level, implementing required documentation and risk management systems, ensuring data governance practices, and establishing human oversight mechanisms.
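The first two steps, inventorying and classifying, are often easiest to start as a simple structured record. The sketch below is hypothetical: the fields and risk labels are illustrative shorthand, not terms defined by the Act.

```python
# A minimal AI system inventory as plain records (illustrative only).
inventory = [
    {"system": "resume screening model", "owner": "HR",
     "risk": "high (Annex III: employment)"},
    {"system": "marketing copy generator", "owner": "Marketing",
     "risk": "transparency (Article 50: synthetic content)"},
    {"system": "spam filter", "owner": "IT",
     "risk": "minimal"},
]

def systems_by_risk(inventory: list[dict], keyword: str) -> list[str]:
    """Return the names of systems whose risk classification mentions a keyword."""
    return [row["system"] for row in inventory if keyword in row["risk"]]
```

Even a flat list like this gives every later step, documentation, risk management, and oversight, a concrete scope to attach to.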

According to the EU AI Act, significant modifications to an AI system can change your role from a deployer to a provider, triggering additional compliance obligations. Key modifications that may reclassify you include: • Altering Core Algorithms: Changes to the fundamental logic or algorithms of the AI system. • Re-training with New Data: Using new datasets for training that substantially alter the system's performance or behavior. • Integration with Other Systems: Modifying how the AI system interacts with other hardware or software components. Implications of becoming a provider include increased responsibilities such as complying with all provider obligations under the Act, including conformity assessments, documentation requirements, and ongoing monitoring obligations.

The Digital Omnibus on AI is a legislative proposal published by the European Commission on 19 November 2025 to amend the EU AI Act (Regulation 2024/1689). It delays high-risk AI deadlines, extends simplified compliance to companies with up to 750 employees, lets sectoral product regulations take precedence over separate AI Act conformity assessments, and broadens the use of sensitive data for bias testing. The Council and Parliament adopted their negotiating positions in March 2026. Trilogue negotiations are expected to conclude by mid-2026.

It depends on your AI system. Prohibited AI practices and AI literacy requirements are already enforceable since February 2025, and GPAI obligations since August 2025. The Omnibus does not change these. For high-risk AI systems, both the Council and Parliament agree on fixed new dates: December 2027 for standalone systems listed in Annex III (biometrics, critical infrastructure, law enforcement) and August 2028 for AI embedded in regulated products under Annex I (medical devices, machinery, vehicles). These dates are proposed and subject to trilogue, but the direction is clear. Do not pause compliance work.

The existing Article 5 prohibitions have applied since February 2025. The Omnibus proposes adding a new ban on AI systems that generate non-consensual intimate imagery of real persons and child sexual abuse material. Both the Council and Parliament support this addition, making it very likely to survive trilogue. This means the Omnibus is not only delaying obligations but also tightening rules in specific areas.

Ensure Your AI Compliance

Whether you are already using or considering AI in your business, keeping these upcoming regulatory changes in mind is essential. Modulos can support your compliance journey.