Get Ready for
the EU AI Act Omnibus
The EU AI Act entered into force in 2024. Now the Omnibus amendments are changing the rules. New deadlines, new prohibited practices, and simplified requirements for high-risk AI. Stay ahead of what's coming.
Timeline and Compliance Milestones
The EU AI Act entered into force on 1 August 2024. Since then, prohibited AI practices and AI literacy requirements became enforceable in February 2025, and GPAI providers had to comply with transparency obligations by August 2025.
The Digital Omnibus proposal, now in trilogue, is reshaping what comes next. High-risk AI deadlines are shifting from August 2026 to December 2027 (Annex III) and August 2028 (Annex I). The timeline below reflects both confirmed milestones and the proposed changes.
The Act officially enters into force
Prohibitions on unacceptable risk and AI literacy requirements enter into force
Obligations for GPAI providers, notifications to authorities, and fines go into effect
Commission implementing act on post-market monitoring
General application date: GPAI enforcement, codes of practice, and remaining provisions take effect
Obligations for high-risk AI systems in biometrics, critical infrastructure, and law enforcement (Annex III)
Obligations for high-risk AI systems as safety components in regulated products (Annex I)
Compliance for AI systems in large-scale IT systems under EU law in Freedom, Security, and Justice
What the Omnibus Changes
The Digital Omnibus is not just a delay. It reshapes timelines, tightens some rules, and simplifies others. Here is what is moving, what is staying, and what you should do about it.
Main high-risk AI rules (Annex III)
Under the enacted AI Act, the main high-risk obligations still apply from 2 August 2026.
The likely landing zone is 2 December 2027. The Commission, Council and Parliament all support a delay, even if they differ on the mechanics.
Keep inventory, classification, control design and evidence planning moving. Do not freeze readiness work waiting for trilogue.
Product-integrated high-risk AI (Annex I)
Under the enacted AI Act, these obligations still apply from 2 August 2027.
The likely landing zone is 2 August 2028 for AI embedded in regulated products.
Align product, legal, engineering and quality teams early if AI is embedded in medical, machinery, radio, transport or other regulated products.
AI literacy
Since 2 February 2025, providers and deployers must ensure a sufficient level of AI literacy for staff and other persons handling AI systems on their behalf.
The rule is likely to become more proportionate and more clearly framed, but not disappear as a practical expectation.
Train staff anyway. It reduces operational risk, improves governance maturity and remains a strong signal to customers and regulators.
Sensitive data for bias testing
Use of special-category personal data remains tightly constrained under GDPR and the current AI Act framework.
The Omnibus is likely to expand room for bias detection and mitigation with safeguards, but the exact threshold and wording remain contested.
Prepare governance, safeguards, legal analysis and documentation before assuming this route will be available in practice.
Registration and documentation
Annex III self-assessment, technical documentation and related record-keeping obligations still flow from the enacted Act.
The direction is simplification, not disappearance. Council and Parliament both point toward a lighter, more targeted regime rather than a full deletion.
Structure documentation now so it can be streamlined later instead of rebuilt under deadline pressure.
Prohibited practices
The Article 5 prohibitions have applied since 2 February 2025.
The Omnibus is not just a delay package. Council has proposed adding clearer restrictions around systems capable of generating non-consensual intimate content and child sexual abuse material.
Do not treat the Omnibus as a reason to pause all AI Act work. Some obligations are already live, and some prohibitions may tighten further.
How Compliance Actually Works
The EU AI Act doesn't sort AI systems into tidy risk tiers. It runs four independent checks, and the obligations stack. A single AI system can trigger multiple gates simultaneously.
Most guides get this wrong. Here's how compliance actually works.
Prohibited Practices
Does this AI practice cross a red line?
High-Risk Systems
Is this AI used in a high-stakes domain?
Transparency
Does this AI interact with people, detect emotions, or generate synthetic media?
General-Purpose AI
Are you providing a foundation model or GPAI?
Obligations stack: one system can trigger multiple gates
Examples
High-risk (essential services) + Transparency (human interaction)
Transparency only: disclose it's AI
All three: High-risk + Transparency + GPAI obligations
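The gate-stacking logic above can be sketched in a few lines of code. This is a hypothetical illustration with our own field names, not a legal classification tool: prohibited practices end the analysis, while the remaining three gates are checked independently and their obligations accumulate.

```python
# Hypothetical sketch of the four independent AI Act "gates".
# Field names are illustrative, not legal categories.
from dataclasses import dataclass

@dataclass
class AISystem:
    prohibited_practice: bool = False     # Article 5 red line
    high_risk_domain: bool = False        # Annex I / Annex III use case
    interacts_or_generates: bool = False  # chatbots, emotion detection, synthetic media
    is_gpai_provider: bool = False        # providing a foundation model / GPAI

def applicable_gates(system: AISystem) -> list[str]:
    """Each check runs independently; obligations stack rather than
    resolving into a single risk tier."""
    if system.prohibited_practice:
        # A prohibited practice cannot be placed on the market at all.
        return ["prohibited"]
    gates = []
    if system.high_risk_domain:
        gates.append("high-risk obligations")
    if system.interacts_or_generates:
        gates.append("transparency obligations")
    if system.is_gpai_provider:
        gates.append("GPAI obligations")
    return gates

# A credit-scoring chatbot built on a self-provided foundation model
# triggers all three remaining gates at once:
print(applicable_gates(AISystem(high_risk_domain=True,
                                interacts_or_generates=True,
                                is_gpai_provider=True)))
# → ['high-risk obligations', 'transparency obligations', 'GPAI obligations']
```

The point of modeling it this way: there is no single "risk level" attribute on a system, only a set of checks whose results combine.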
The EU AI Act Covers More Than You Think
You thought you had 3 AI systems. You probably have 50. The Act's definition is broad, and most of your AI is hiding below the surface.
The average enterprise has 10x more AI systems than it assumes.
Most haven't been inventoried.
EU AI Act Definition (Article 3)
An AI system is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
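For inventory screening, the Article 3 definition can be read as a short checklist. The sketch below is a simplified, non-legal reading with hypothetical parameter names; note that adaptiveness after deployment is phrased as "may exhibit" in the Act, so it is deliberately not treated as a required criterion.

```python
# Hypothetical screening checklist derived from the Article 3 definition.
# Parameter names are our own; this is not a legal test.
def is_ai_system(machine_based: bool,
                 operates_with_autonomy: bool,
                 infers_outputs_from_input: bool) -> bool:
    """Cumulative core criteria: machine-based, some level of autonomy,
    and inference of outputs (predictions, content, recommendations,
    decisions) from input. Adaptiveness after deployment is optional
    under the definition, so it is not required here."""
    return machine_based and operates_with_autonomy and infers_outputs_from_input

# A churn-prediction model in a CRM passes all three criteria;
# a fixed lookup table fails the inference criterion.
assert is_ai_system(True, True, True)
assert not is_ai_system(True, True, False)
```

Running every tool in your stack through a checklist like this is how the "3 systems" assumption turns into 50.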
The EU AI Act Follows Your AI
Like GDPR, the EU AI Act is extraterritorial. It applies based on who you affect, not where you're headquartered.
The Chain of Responsibility
A real-world example of how the EU AI Act reaches across borders
Builds AI credit scoring model
System placed on EU market through value chain
Licenses model for fintech platform
Deploying high-risk AI affecting EU persons
Credit decisions made about them
Protected by the EU AI Act
Compliance Requirements
The Act lays out a range of requirements for high-risk AI systems, covering:
- Risk management
- Data governance and bias mitigation
- Technical documentation
- Record-keeping (logging)
- Transparency for deployers
- Human oversight
- Accuracy, robustness, and cybersecurity
- Fundamental rights impact assessment*
* Required only for public sector deployers and private deployers using high-risk AI for credit scoring or life/health insurance risk assessment.
How Modulos Helps You Meet Every Requirement
The Modulos AI Governance Platform addresses each EU AI Act obligation with purpose-built tools.
Conformity Assessments
High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This structured process ensures your AI systems meet regulatory requirements.
Step 1 - A high-risk AI system is developed
Establish, implement, document, and maintain a risk management system to address the risks posed by a high-risk AI system.
Step 2 - The system undergoes the conformity assessment and complies with AI requirements
- Implement effective data governance, including bias mitigation, training, validation, and testing of data sets.
- Maintain up-to-date technical documentation in a clear and comprehensive manner.
Step 3 - Registration of stand-alone systems in an EU database.
- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime.
- Design systems to ensure sufficient transparency for deployers to interpret outputs and use appropriately.
Step 4 - A declaration of conformity is signed, and the AI system should bear the CE marking
- Develop systems to maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.
- Ensure proper human oversight during the period the system is in use.
The system can be placed on the market.
Once substantial changes happen in the AI system's lifecycle, repeat from Step 2.
Disclaimer: The steps outlined above are intended to provide a general overview of the conformity assessment process. They should not be considered exhaustive and are not intended as legal or technical advice.
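The loop in the steps above, including the re-entry at Step 2 after a substantial change, can be sketched as a simple workflow trace. Step names mirror the overview; like the overview itself, this is illustrative, not legal or technical advice.

```python
# Hypothetical sketch of the conformity assessment loop described above.
def conformity_workflow(substantial_changes: int) -> list[str]:
    """Return the ordered trace of steps, replaying Steps 2-4 and
    market placement once per substantial change in the lifecycle."""
    steps = [
        "1: develop high-risk system with a risk management system",
        "2: conformity assessment against AI requirements",
        "3: register stand-alone system in the EU database",
        "4: sign declaration of conformity, affix CE marking",
        "place on market",
    ]
    trace = list(steps)
    for _ in range(substantial_changes):
        # Substantial changes re-enter the process at Step 2, not Step 1.
        trace += steps[1:]
    return trace

# One substantial change after market placement repeats Steps 2-4:
trace = conformity_workflow(substantial_changes=1)
assert trace.count("2: conformity assessment against AI requirements") == 2
assert trace[0].startswith("1:")  # development itself happens only once
```

The design point the loop captures: conformity is not a one-off gate at launch but a recurring obligation across the system's lifecycle.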
Understanding Roles and Responsibilities
The EU AI Act outlines specific roles and responsibilities for stakeholders in the AI system lifecycle:
Providers
Deployers
Importers
Distributors
Modifying AI Systems
Significant modifications, such as altering core algorithms or retraining with new data, may reclassify you as a provider and bring the full set of provider obligations with that role.
Penalties for Non-Compliance
The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company's global annual turnover or a predetermined amount, whichever is higher. The Omnibus proposal extends proportionate penalty caps beyond SMEs and start-ups to Small Mid-Cap companies (up to 750 employees or €150M turnover), with simplified documentation and quality management obligations.
Ensure your AI systems comply with the EU AI Act to avoid these penalties.
Request a Demo
Penalty Breakdown
Non-compliance with prohibitions: up to €35 million or 7% of global annual turnover
Supplying incorrect, incomplete, or misleading information: up to €7.5 million or 1% of global annual turnover
Non-compliance with other obligations: up to €15 million or 3% of global annual turnover
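The "whichever is higher" rule is easy to get wrong, so here is the arithmetic as a short sketch. It uses the fine caps from Article 99 of the enacted Act; the `proportionate_cap` flag models the SME regime (and, under the Omnibus proposal, small mid-caps), where the lower of the two amounts applies instead.

```python
# Fine ceilings under the enacted AI Act (Article 99):
# a fixed amount or a share of worldwide annual turnover.
CAPS = {
    "prohibitions": (35_000_000, 0.07),          # Article 5 violations
    "other_obligations": (15_000_000, 0.03),     # e.g. high-risk requirements
    "incorrect_information": (7_500_000, 0.01),  # misleading info to authorities
}

def max_fine(category: str, turnover: float,
             proportionate_cap: bool = False) -> float:
    """Default: the HIGHER of fixed amount and turnover share.
    For SMEs (and proposed small mid-caps): the LOWER of the two."""
    fixed, pct = CAPS[category]
    choose = min if proportionate_cap else max
    return choose(fixed, pct * turnover)

# A company with EUR 1bn turnover: 7% (EUR 70m) exceeds the EUR 35m floor.
assert max_fine("prohibitions", 1_000_000_000) == 70_000_000
# An SME with EUR 50m turnover: 3% (EUR 1.5m) is below the EUR 15m cap.
assert max_fine("other_obligations", 50_000_000, proportionate_cap=True) == 1_500_000
```

For large companies the turnover percentage usually dominates, which is why the headline euro figures understate the real exposure.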
Download the EU AI Act Guide
Learn how to ensure your AI systems comply with the EU AI Act. This guide provides a clear overview of the regulation, mandatory compliance requirements, and how to prepare your AI operations for these changes.
Download the Guide
EU AI Act Guide
Foundations and
Practical Insights
FAQ about EU AI Act
The EU AI Act is the European Union's flagship law to regulate how AI systems should be designed and deployed. It aims to protect fundamental rights, ensure safety, and foster innovation while creating a harmonized legal framework across the EU.
The EU AI Act requires AI system providers based in the EU to comply with the regulation. It also applies to providers and deployers outside the EU whose AI systems are used on the EU market. This means organizations worldwide may need to comply if their AI products or services reach EU users.
The situation is similar to the global reach of the General Data Protection Regulation (GDPR). The AI Act applies to providers outside the EU when the output of their AI system is used in the EU, and non-EU deployers using AI systems in the EU are also covered. This extraterritorial scope means companies worldwide must assess their AI offerings for EU compliance.
The EU AI Act entered into force on 1 August 2024. Prohibitions on unacceptable risk took effect in February 2025, and GPAI obligations in August 2025. The general application date is August 2026. The Digital Omnibus proposal, currently in trilogue, would push high-risk system deadlines to December 2027 (Annex III standalone systems) and August 2028 (Annex I embedded products). These dates are proposed and not yet final.
To be ready for the EU AI Act, companies will have to adhere to the extensive requirements stipulated in the regulation. Key steps include: conducting an AI systems inventory, classifying systems by risk level, implementing required documentation and risk management systems, ensuring data governance practices, and establishing human oversight mechanisms.
According to the EU AI Act, significant modifications to an AI system can change your role from a deployer to a provider, triggering additional compliance obligations. Key modifications that may reclassify you include:
- Altering core algorithms: changes to the fundamental logic or algorithms of the AI system.
- Re-training with new data: using new datasets that substantially alter the system's performance or behavior.
- Integration with other systems: modifying how the AI system interacts with other hardware or software components.
Becoming a provider means complying with all provider obligations under the Act, including conformity assessments, documentation requirements, and ongoing monitoring.
The Digital Omnibus on AI is a legislative proposal published by the European Commission on 19 November 2025 to amend the EU AI Act (Regulation 2024/1689). It delays high-risk AI deadlines, extends simplified compliance to companies with up to 750 employees, lets sectoral product regulations take precedence over separate AI Act conformity assessments, and broadens the use of sensitive data for bias testing. The Council and Parliament adopted their negotiating positions in March 2026. Trilogue negotiations are expected to conclude by mid-2026.
It depends on your AI system. Prohibited AI practices and AI literacy requirements are already enforceable since February 2025, and GPAI obligations since August 2025. The Omnibus does not change these. For high-risk AI systems, both the Council and Parliament agree on fixed new dates: December 2027 for standalone systems listed in Annex III (biometrics, critical infrastructure, law enforcement) and August 2028 for AI embedded in regulated products under Annex I (medical devices, machinery, vehicles). These dates are proposed and subject to trilogue, but the direction is clear. Do not pause compliance work.
The existing Article 5 prohibitions have applied since February 2025. The Omnibus proposes adding a new ban on AI systems that generate non-consensual intimate imagery of real persons and child sexual abuse material. Both the Council and Parliament support this addition, making it very likely to survive trilogue. This means the Omnibus is not only delaying obligations but also tightening rules in specific areas.
Related Frameworks and Standards
The EU AI Act works alongside other key frameworks. Organizations often combine multiple standards to build a comprehensive AI governance strategy.
ISO/IEC 42001
The international standard for AI management systems — helps demonstrate conformity with the EU AI Act.
NIST AI RMF
The U.S. risk management framework for AI — a complementary approach to identifying and managing AI risks.
AI Governance Guide
A comprehensive guide covering governance principles, risk management, and responsible AI practices.
Ensure Your AI Compliance
Whether you are already using or considering AI in your business, keeping these upcoming regulatory changes in mind is essential. Modulos can support your compliance journey.