Get Ready for the EU AI Act

The EU Artificial Intelligence Act is setting a global standard for AI regulation, much as GDPR did for data privacy. This page provides an overview of the Act and explains how to prepare your AI systems for compliance.

How is an AI System Defined?

According to the EU AI Act, an 'Artificial Intelligence system' is a machine-based system designed to operate with varying levels of autonomy. It may exhibit adaptiveness after deployment and, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The following breakdown outlines the core components and characteristics of this definition:

  • Machine-Based System

    What this covers: Relies on hardware and software to function (data processing, model training, decision-making, etc.). Includes quantum and biological systems if they enable computation.
    Examples: Traditional servers running trained models; quantum computers running AI algorithms; cloud-based "AI-as-a-Service" solutions.

  • Varying Levels of Autonomy

    What this covers: Operates with some degree of independence from direct human control. Can range from semi-automated to fully automated (human involvement is partial or optional).
    Examples: Chatbots that respond to user queries but let humans override them; autonomous drones or robots that need minimal human input once deployed.

  • Adaptiveness (Optional)

    What this covers: May evolve or learn post-deployment (self-learning, model updates). Not strictly required for a system to be considered AI, but common in many AI applications.
    Examples: Recommendation algorithms that refine suggestions with each user interaction; machine learning models that retrain when new data arrives.

  • Objectives (Explicit or Implicit)

    What this covers: Systems can be programmed with clear goals (explicit) or derive them from patterns in data (implicit). Differs from "intended purpose," which is the real-world function or deployment scenario.
    Examples: A language model aiming to minimise prediction errors vs. a chatbot intended for legal consulting; an image classifier coded to identify cats but deployed to moderate user-generated content.

  • Infers How to Generate Outputs

    What this covers: The core feature distinguishing AI from simpler software: the system uses machine learning or logic-based inference rather than solely fixed, human-defined rules (the sketch after this breakdown contrasts the two).
    Examples: Supervised learning (spam detection); unsupervised learning (anomaly detection); reinforcement learning (robot navigation); symbolic reasoning (expert systems).

  • Generates Outputs with Real Impact

    What this covers: Produces predictions, recommendations, content, or decisions that can shape physical or virtual environments. Emphasises tangible influence on processes, people, or infrastructure.
    Examples: Predictive maintenance in factories; generative text/image models in digital marketing; automated hiring decisions or medical diagnostics.

  • Interaction with Environments

    What this covers: AI systems aren't passive; they actively change or affect the context in which they're deployed. This can include digital ecosystems or physical settings.
    Examples: Self-driving cars adjusting speed in traffic; an AI content-filtering system that moderates an online forum's posts; automated trading bots that buy and sell in financial markets.
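In practice, the "infers how to generate outputs" criterion is the clearest dividing line between AI systems and conventional software. As a rough illustration (not a legal test), here is a minimal sketch contrasting a fixed, human-defined rule with a model that learns its decision logic from data; the scikit-learn usage and the toy dataset are assumptions for illustration only.

```python
# Illustrative contrast: fixed rules vs. a system that "infers how to
# generate outputs" in the sense of the AI Act definition.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def rule_based_spam_filter(message: str) -> bool:
    # Fixed, human-defined rules: behaviour is fully specified in advance
    # and does not change with data. This is conventional software.
    banned_phrases = {"free money", "click here", "winner"}
    return any(phrase in message.lower() for phrase in banned_phrases)

# Toy training data (an assumption for this sketch).
messages = ["free money now", "click here to win", "meeting at noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# The learned filter infers its decision logic from the data rather than
# from enumerated rules, which is the distinguishing feature above.
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

def learned_spam_filter(message: str) -> bool:
    return bool(model.predict(vectorizer.transform([message]))[0])

print(rule_based_spam_filter("free money today"))  # True, by a fixed rule
print(learned_spam_filter("win free money"))       # True, by inference
```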

Risk-Based Classification

The EU AI Act introduces a risk-based classification for AI applications, categorizing them on a spectrum from minimal risk to outright banned, based on their impact on individuals and society.

These classifications may overlap: some AI use cases are classified as Prohibited or High Risk, others carry transparency obligations, General Purpose AI systems (GPAI) are subject to their own requirements, and a range of applications fall partially or entirely outside the Act's scope. A simple triage sketch follows the list below.

  • Unacceptable Risk

    AI applications such as social scoring systems and manipulative technologies are banned due to their potential to cause harm.

  • High Risk

High-risk AI applications, such as those evaluating creditworthiness or operating critical infrastructure, require rigorous assessment before market entry. Businesses must determine whether their existing or planned AI applications fall under this category and prepare for strict compliance reviews.

  • Limited Risk

Limited-risk applications, such as image and video processing, recommender systems, and chatbots, still carry obligations, such as disclosing to users that they are interacting with an AI system. Data quality, transparency, and fairness standards are essential even for limited-risk applications.

  • Minimal Risk

AI applications deemed to have minimal risk, such as spam filters and video games, are not subject to further regulatory requirements.
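To make the tiers concrete, here is a minimal sketch of how an organization might inventory its AI use cases against them. The tier assignments in the sample data are illustrative assumptions, not legal determinations; actual classification must follow the Act's criteria and, where needed, legal advice.

```python
# Minimal sketch of an internal AI-use-case inventory keyed to the Act's
# risk tiers. The tier assignments below are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited, e.g. social scoring
    HIGH = "high"                  # e.g. creditworthiness, critical infrastructure
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters, video games

@dataclass
class AIUseCase:
    name: str
    description: str
    tier: RiskTier

inventory = [
    AIUseCase("credit-scoring", "Evaluates loan applicants", RiskTier.HIGH),
    AIUseCase("support-chatbot", "Answers customer queries", RiskTier.LIMITED),
    AIUseCase("spam-filter", "Filters inbound email", RiskTier.MINIMAL),
]

# Flag anything that is prohibited or needs a conformity assessment.
for use_case in inventory:
    if use_case.tier is RiskTier.UNACCEPTABLE:
        print(f"STOP: {use_case.name} is a prohibited practice")
    elif use_case.tier is RiskTier.HIGH:
        print(f"Conformity assessment required: {use_case.name}")
```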

Compliance Requirements

The Act lays out a range of requirements for high-risk AI systems, covering the following areas (a simple tracking sketch follows the list):

  • Risk Management System
  • Data and Data Governance
  • Technical Documentation
  • Record Keeping
  • Transparency and provision of information to deployers
  • Human Oversight
  • Accuracy, Robustness and Cybersecurity
  • Quality Management System
  • Fundamental Rights Impact Assessment
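Teams often track these requirement areas against concrete evidence as a first step. Below is an illustrative sketch of such a tracker; the requirement keys mirror the list above, and the evidence entries are placeholders, not a compliance determination.

```python
# Illustrative tracker mapping the high-risk requirement areas above to
# evidence artifacts. The entries are placeholders, not a legal assessment.
requirements = {
    "risk_management_system": ["risk register", "mitigation plan"],
    "data_and_data_governance": ["dataset datasheets", "bias analysis report"],
    "technical_documentation": ["architecture document", "model card"],
    "record_keeping": ["event logging specification"],
    "transparency_and_information": ["instructions for deployers"],
    "human_oversight": ["override and escalation procedure"],
    "accuracy_robustness_cybersecurity": ["test reports", "security review"],
    "quality_management_system": ["QMS manual"],
    "fundamental_rights_impact_assessment": [],  # still outstanding here
}

# Flag requirement areas with no collected evidence yet.
for area, evidence in requirements.items():
    status = "evidence on file" if evidence else "MISSING EVIDENCE"
    print(f"{area}: {status}")
```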

Limited-risk systems are evaluated under the same categories but face a lower level of scrutiny.

Aligning with industry standards like ISO/IEC 42001:2023 – AI Management System – can help organizations address some of the EU AI Act’s compliance requirements. ISO 42001 provides a structured approach to managing AI risks, ensuring data quality, and maintaining robust documentation.

Conformity Assessments

High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This includes generating and maintaining extensive documentation and evidence.

Step 1 - A high-risk AI system is developed

Establish, implement, document, and maintain a risk management system to address the risks posed by a high-risk AI system.

Step 2 - The system undergoes the conformity assessment and complies with AI requirements

- Implement effective data governance, including bias mitigation, training, validation, and testing of data sets.

- Maintain up-to-date technical documentation in a clear and comprehensive manner.

Step 3 - Registration of stand-alone systems in an EU database

- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime (a logging sketch follows these steps).

- Design systems to ensure sufficient transparency for deployers to interpret outputs and use appropriately.

Step 4 - A declaration of conformity is signed, and the AI system should bear the CE marking

- Develop systems to maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.

- Ensure proper human oversight during the period the system is in use.
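The event-recording obligation in particular lends itself to an engineering control. Here is a minimal sketch of structured, append-only event logging using Python's standard logging module; the event fields and log path are assumptions for illustration, and a production system would also need retention management and tamper-evident storage.

```python
# Minimal sketch of lifetime event recording for an AI system. Field
# names and the log path are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_events.log"))  # appends by default

def record_event(event_type: str, details: dict) -> None:
    """Write one structured audit event per line (JSON Lines)."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "prediction", "model_update", "override"
        "details": details,
    }))

record_event("prediction", {"model_version": "1.2.0", "input_id": "req-001",
                            "output": "approved", "confidence": 0.87})
record_event("human_override", {"input_id": "req-001", "new_output": "manual review"})
```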

Disclaimer:

The steps outlined above are intended to provide a general overview of the conformity assessment process. They should not be considered exhaustive and are not intended as legal or technical advice.

Understanding Roles and Responsibilities

The EU AI Act outlines specific roles and responsibilities for stakeholders in the AI system lifecycle. Each role comes with distinct obligations and impacts under the regulation.
Here's a brief overview:

Providers

Role: Develop and market AI systems.

Responsibilities: Maintain technical documentation, ensure compliance with the Act, and provide transparency information.

Deployers

Role: Use AI systems within their operations.

Responsibilities: Conduct impact assessments, notify authorities, and involve stakeholders in the assessment process.

Importers

Role: Market AI systems from third countries.

Responsibilities: Verify compliance, provide necessary documentation, and cooperate with authorities.

Distributors

Role: Make AI systems available on the market.

Responsibilities: Verify CE marking and conformity, take corrective actions if needed, and cooperate with authorities.

Modifying AI Systems

Significant modifications, such as altering core algorithms or retraining with new data, may reclassify you as a provider, necessitating adherence to provider obligations.

Download the EU AI Act Guide

Learn how to ensure your AI systems comply with the EU AI Act. This guide provides a clear overview of the regulation, mandatory compliance requirements, and how to prepare your AI operations for these changes.

Timeline and Compliance Milestones

In April 2021, the European Commission published its proposal for the EU AI Act, initiating the legislative process. On 12 July 2024, the European Union Artificial Intelligence Act was published in the Official Journal of the European Union, marking the final step in the AI Act's legislative journey.

The Act officially entered into force on 1 August 2024. By 2 February 2025, all providers and deployers of AI systems needed to ensure, to the best of their ability, a sufficient level of AI literacy among staff dealing with the operation and use of AI systems. The Act will become fully applicable in August 2026, except for specific provisions. The key milestones are listed below; a short date-calculation sketch follows the list.

  • August 2024

    The Act officially enters into force

  • 6 Months After
    (February 2025)

Prohibitions on unacceptable-risk AI and the AI literacy requirements enter into force

  • 12 Months After
    (August 2025)

Obligations for GPAI providers, as well as rules on notifications to authorities and on fines, go into effect

  • 18 Months After
    (February 2026)

    Commission implementing act on post-market monitoring

  • 24 Months After
    (August 2026)

    Obligations for high-risk AI systems in biometrics, critical infrastructure, and law enforcement

  • 36 Months After
    (August 2027)

Obligations apply to high-risk AI systems that are safety components of products requiring third-party conformity assessments, and the entire EU AI Act becomes applicable

  • By End of 2030

Compliance deadline for AI systems that are components of large-scale IT systems established under EU law in the area of Freedom, Security, and Justice
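Each milestone is a simple month offset from the entry-into-force date, which the following standard-library sketch derives; it is purely illustrative, and the legally binding dates are those set out in the Act itself.

```python
# Derive the milestone dates as month offsets from entry into force.
# Purely illustrative; the binding dates are those set out in the Act.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def months_after(start: date, months: int) -> date:
    years, month_index = divmod(start.month - 1 + months, 12)
    return start.replace(year=start.year + years, month=month_index + 1)

milestones = {
    6: "Prohibitions and AI literacy requirements",
    12: "GPAI provider obligations, notifications, fines",
    18: "Implementing act on post-market monitoring",
    24: "High-risk obligations (biometrics, critical infrastructure, law enforcement)",
    36: "High-risk safety components; entire Act applicable",
}

for offset, description in milestones.items():
    print(f"{months_after(ENTRY_INTO_FORCE, offset)}: {description}")
```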

Penalties for Non-Compliance

The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company's global annual turnover or a predetermined amount, whichever is higher (a worked example follows the breakdown below). Provisions include more proportionate caps on administrative fines for SMEs and start-ups.

Ensure your AI systems comply with the EU AI Act to avoid these penalties.

Penalty Breakdown

  • Non-compliance with prohibitions: up to €35M or 7% of global annual turnover
  • Supplying incorrect, incomplete, or misleading information: up to €7.5M or 1.5% of global annual turnover
  • Non-compliance with other obligations: up to €15M or 3% of global annual turnover
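To illustrate the "whichever is higher" rule, the sketch below computes the applicable maximum fine for a given turnover; the turnover figure is a made-up example, and actual fines are set by regulators case by case.

```python
# Illustrative "whichever is higher" fine calculation. The turnover
# figure is a made-up example, not guidance on actual enforcement.
PENALTY_TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),   # €35M or 7% of turnover
    "misleading_information": (7_500_000,  0.015),  # €7.5M or 1.5% of turnover
    "other_obligations":      (15_000_000, 0.03),   # €15M or 3% of turnover
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    fixed_cap, turnover_share = PENALTY_TIERS[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover)

# Example: a company with €2B global annual turnover.
turnover = 2_000_000_000
print(max_fine("prohibited_practices", turnover))    # 140,000,000: 7% exceeds €35M
print(max_fine("misleading_information", turnover))  # 30,000,000: 1.5% exceeds €7.5M
```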

FAQ about the EU AI Act

What is the EU AI Act?

The EU AI Act is the European Union's flagship law regulating how AI systems are designed and used within the EU. The Act takes a risk-based approach, classifying AI applications on a spectrum from minimal risk to banned entirely. In its approach, the Act is similar to other EU product safety laws, and it shares many aspects with GDPR, including serious penalties for violations of up to 7% of global turnover.

Who will be affected by the EU AI Act?

The EU AI Act mandates that AI system providers based in the EU comply with the regulation. Moreover, both providers and users situated outside the EU are also obligated to abide by these rules if the outputs of their AI systems are used within the EU. Organizations using AI for military purposes and public agencies in countries outside the EU are exempt from the regulation, as are purely private projects and pure research.

How are companies outside of the EU impacted by the EU AI Act?

The situation is similar to the global reach of the General Data Protection Regulation (GDPR). The AI Act applies as long as the AI system is on the EU market or its outputs have effects in the EU. Even if a non-European company does not plan to serve the EU market, compliance may still be necessary to mitigate the legal risk associated with offering AI-based products and services.

What is the timeline for the EU AI Act?

On 1 August 2024, the EU AI Act officially entered into force. The Act will become fully applicable 24 months after this date, except for specific provisions. One notable provision was that by 2 February 2025, all providers and deployers of AI systems needed to ensure, to the best of their ability, a sufficient level of AI literacy among staff involved in the operation and use of these systems.

How can companies be ready for the EU AI Act?

To be ready for the EU AI Act, companies will have to adhere to its extensive requirements. They should conduct a comprehensive audit of their existing AI systems to assess compliance and consult legal experts to understand the implications for their specific operations. Ongoing monitoring and staff training are also essential to ensure that both current and future AI technologies meet the regulatory requirements.

When do you become a provider if you modify an AI system?

According to the EU AI Act, significant modifications to an AI system can change your role from a deployer, importer, or distributor to a provider. Significant modifications include:

  • Altering Core Algorithms: Changes to the fundamental logic or algorithms of the AI system.
  • Re-training with New Data: Using new datasets for training that substantially alter the system’s performance or behavior.
  • Integration with Other Systems: Modifying how the AI system interacts with other hardware or software components.

Implications of becoming a provider include increased responsibilities such as complying with all provider obligations, maintaining detailed technical documentation, ensuring compliance with the Act’s requirements, and providing transparency information. You may also be subject to additional scrutiny and regulatory requirements.

Ensure Your AI Compliance

Whether you are already using AI in your business or considering it, keeping these regulatory requirements in mind will be vital to avoid delays and penalties. Use Modulos to ensure that your AI models are trained transparently on the high-quality data the Act requires.