Get Ready for the EU AI Act

The EU Artificial Intelligence Act is setting a global standard for AI regulation, as GDPR did for data privacy. Here, we provide an overview of the Act and guide you on how to prepare your AI systems for compliance.

How is AI Defined?

According to the EU AI Act, an ‘Artificial Intelligence system’ is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This broad definition, aligned with the OECD's definition of AI, is designed to be future-proof: it covers a wide range of technologies, from generative AI and deep learning to more conventional data analysis techniques.

How to Assess if You Are Using an "AI System"

The overview below maps four defining characteristics of AI systems to three everyday examples: a spam filter, a virtual assistant, and a credit scoring system.

Varying Levels of Autonomy

AI systems may exhibit adaptiveness and can operate independently from human involvement.

  • Spam filter: operates without human involvement and refines itself through feedback.
  • Virtual assistant: performs tasks based on voice commands without human intervention.
  • Credit scoring: evaluates creditworthiness autonomously using data inputs.

Explicit or Implicit Objectives

Objectives can be set by humans or be implicit in the tasks and data.

  • Spam filter: identifies spam emails and similar messages.
  • Virtual assistant: assists with tasks such as setting reminders, providing information, or controlling smart home devices.
  • Credit scoring: assesses financial risk for lending decisions.

Generating Outputs

AI systems infer how to generate outputs that influence physical or virtual environments.

  • Spam filter: changes the contents of your inbox and spam folder.
  • Virtual assistant: enhances user productivity and convenience in managing daily activities.
  • Credit scoring: influences loan approval processes and the interest rates offered to applicants.

Self-Learning Capabilities

AI systems can change while in use, adapting to new data or tasks.

  • Spam filter: learns from examples of spam emails to distinguish them from legitimate messages.
  • Virtual assistant: improves the accuracy and relevance of its responses through user interaction.
  • Credit scoring: continuously updates its scoring models based on new financial data.

Risk-Based Classification

The EU AI Act introduces a risk-based classification for AI applications, categorizing them from minimal risk to banned applications based on their impact on individuals and society.

  • Unacceptable Risk

    AI applications such as social scoring systems and manipulative technologies are banned due to their potential to cause harm.

  • High Risk

    High-risk AI applications, such as systems that evaluate creditworthiness or manage critical infrastructure, require rigorous assessment before market entry. Businesses must determine whether their existing or planned AI applications fall under this category and prepare for strict compliance reviews.

  • Limited Risk

    Limited-risk applications, such as image and video processing, recommender systems, and chatbots, still carry obligations, for example disclosing to users that they are interacting with an AI system. Data quality, transparency, and fairness standards remain essential even for these applications.

  • Minimal Risk

    AI applications deemed to have minimal risk, such as spam filters and video games, are not subject to further regulatory requirements.
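
As a first triage step, it can help to keep an explicit mapping from use cases to the Act's risk tiers. The Python sketch below is a hypothetical illustration of that idea, using the examples named above; the category assignments are assumptions for demonstration, not a legal classification, which requires review against Article 5 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"               # e.g. social scoring
    HIGH = "conformity assessment"        # e.g. credit scoring
    LIMITED = "transparency obligations"  # e.g. chatbots
    MINIMAL = "no extra obligations"      # e.g. spam filters

# Hypothetical use-case -> tier mapping based on the examples above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "recommender_system": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the presumed risk tier, defaulting to HIGH so that
    unclassified systems get reviewed rather than ignored."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("chatbot").value)  # -> "transparency obligations"
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a review before a system is assumed to be out of scope.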

Compliance Requirements

The Act lays out a range of requirements for high-risk AI systems, covering:

  • Risk Management System
  • Data and Data Governance
  • Technical Documentation
  • Record Keeping
  • Transparency and provision of information to deployers
  • Human Oversight
  • Accuracy, Robustness and Cybersecurity
  • Quality Management System
  • Fundamental Rights Impact Assessment

Limited-risk systems are evaluated against the same categories but face a lower level of scrutiny.
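
For limited-risk systems such as chatbots, the disclosure duty can be prototyped by wrapping the chat backend so its first reply tells the user they are talking to an AI. A minimal sketch, assuming a hypothetical `generate_reply` callable standing in for your model:

```python
def with_ai_disclosure(generate_reply):
    """Wrap a chat backend so the first reply carries an AI disclosure.

    `generate_reply` is a hypothetical callable (str -> str); swap in
    your own model call.
    """
    disclosed = False

    def reply(user_msg: str) -> str:
        nonlocal disclosed
        answer = generate_reply(user_msg)
        if not disclosed:
            disclosed = True
            return "Note: you are chatting with an AI system.\n" + answer
        return answer

    return reply

# Usage with a stub backend:
chat = with_ai_disclosure(lambda msg: f"(echo) {msg}")
print(chat("Hello"))   # first reply includes the disclosure
print(chat("Thanks"))  # later replies do not repeat it
```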

Aligning with industry standards like ISO/IEC 42001:2023 can support organizations in meeting these compliance requirements. ISO/IEC 42001 provides a structured approach to managing AI risks, ensuring data quality, and maintaining robust documentation.

Conformity Assessments

High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This includes generating and maintaining extensive documentation and evidence.

Step 1 - A high-risk AI system is developed

Establish, implement, document, and maintain a risk management system to address the risks posed by a high-risk AI system.

Step 2 - The system undergoes the conformity assessment and demonstrates compliance with the AI requirements

- Implement effective data governance, including bias mitigation and quality practices for training, validation, and testing data sets.

- Maintain up-to-date technical documentation in a clear and comprehensive manner.
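
To make the data-governance bullet concrete, the sketch below splits a dataset into training, validation, and test sets and runs a simple per-group positive-rate check, one of many possible bias screens. It is a minimal illustration under assumed column names, not a complete bias-mitigation pipeline.

```python
import numpy as np
import pandas as pd

def split_dataset(df: pd.DataFrame, seed: int = 42):
    """Shuffle and split into 70% train / 15% validation / 15% test."""
    shuffled = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n = len(shuffled)
    train_end, val_end = int(0.70 * n), int(0.85 * n)
    return shuffled[:train_end], shuffled[train_end:val_end], shuffled[val_end:]

def positive_rate_by_group(df: pd.DataFrame, label: str, group: str) -> pd.Series:
    """Per-group rate of positive labels; large gaps between groups flag
    potential bias that should be investigated before training."""
    return df.groupby(group)[label].mean()

# Hypothetical credit-scoring dataset with an assumed protected attribute.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "approved": rng.integers(0, 2, 1_000),
    "age_band": rng.choice(["18-34", "35-54", "55+"], 1_000),
})
train, val, test = split_dataset(data)
print(positive_rate_by_group(train, label="approved", group="age_band"))
```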

Step 3 - Registration of stand-alone systems in an EU database

- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime.

- Design systems to ensure sufficient transparency for deployers to interpret outputs and use appropriately.

Step 4 - A declaration of conformity is signed, and the AI system should bear the CE marking

- Develop systems to maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.

- Ensure proper human oversight while the system is in use.
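
The record-keeping bullet from Step 3 can be prototyped with standard structured logging: each inference is written as a timestamped, append-only event. A minimal sketch; the field names and log destination are assumptions, not wording from the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only event log; production systems would use durable,
# tamper-evident storage rather than a local file.
logging.basicConfig(filename="ai_events.log", level=logging.INFO, format="%(message)s")
logger = logging.getLogger("high_risk_ai")

def log_prediction(model_version: str, inputs: dict, output, operator: str) -> None:
    """Record one inference event with enough context to reconstruct it later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "operator": operator,  # who triggered the call, supporting human oversight
    }
    logger.info(json.dumps(event))

log_prediction("credit-scorer-1.4.2",
               {"income": 52_000, "age_band": "35-54"},
               "approve", "analyst_17")
```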

Disclaimer:

The steps outlined above are intended to provide a general overview of the conformity assessment process. They should not be considered exhaustive and are not intended as legal or technical advice.

Understanding Roles and Responsibilities

The EU AI Act outlines specific roles and responsibilities for stakeholders in the AI system lifecycle. Each role comes with distinct obligations and impacts under the regulation.
Here's a brief overview:

Providers

Role: Develop and market AI systems.

Responsibilities: Maintain technical documentation, ensure compliance with the Act, and provide transparency information.

Deployers

Role: Use AI systems within their operations.

Responsibilities: Conduct impact assessments, notify authorities, and involve stakeholders in the assessment process.

Importers

Role: Place AI systems from third countries on the EU market.

Responsibilities: Verify compliance, provide necessary documentation, and cooperate with authorities.

Distributors

Role: Make AI systems available on the market.

Responsibilities: Verify CE marking and conformity, take corrective actions if needed, and cooperate with authorities.

Modifying AI Systems

Significant modifications, such as altering core algorithms or retraining with new data, may reclassify you as a provider, necessitating adherence to provider obligations.

Download the EU AI Act Guide

Learn how to ensure your AI systems comply with the EU AI Act. This guide provides a clear overview of the regulation, mandatory compliance requirements, and how to prepare your AI operations for these changes.

Timeline and Compliance Milestones

In April 2021, the European Commission published its proposal for the EU AI Act, initiating the legislative process. On 12 July 2024, the Act was published in the Official Journal of the European Union, marking the final step in its legislative journey.

The Act officially entered into force on 1 August 2024. By 2 February 2025, all providers and deployers of AI systems must ensure, to the best of their ability, a sufficient level of AI literacy among staff dealing with the operation and use of AI systems. The Act becomes fully applicable in August 2026, except for specific provisions.

  • August 2024

    The Act officially enters into force

  • 6 Months After
    (February 2025)

    Prohibitions on unacceptable-risk AI practices enter into force

  • 12 Months After
    (August 2025)

    Obligations for providers of general-purpose AI models go into effect

  • 18 Months After
    (February 2026)

    Commission implementing act on post-market monitoring.

  • 24 Months After
    (August 2026)

    Obligations for high-risk AI systems in biometrics, critical infrastructure, and law enforcement

  • 36 Months After
    (August 2027)

    Obligations for high-risk AI systems as safety components or products requiring third-party conformity assessments.

  • By End of 2030

    Compliance deadline for AI systems that are components of large-scale EU IT systems in the area of Freedom, Security, and Justice

Penalties for Non-Compliance

The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company’s global annual turnover or a predetermined amount, whichever is higher. Provisions include more proportionate caps on administrative fines for SMEs and start-ups.

Ensure your AI systems comply with the EU AI Act to avoid these penalties.

Penalty Breakdown

  • Non-compliance with prohibited AI practices: up to €35M or 7% of global annual turnover

  • Supplying incorrect, incomplete, or misleading information: up to €7.5M or 1% of global annual turnover

  • Non-compliance with other obligations: up to €15M or 3% of global annual turnover
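
The "whichever is higher" rule means each ceiling is the maximum of the fixed cap and the turnover percentage. A quick sketch of that arithmetic for a hypothetical company with €2B in global annual turnover:

```python
def fine_ceiling(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """The higher of the fixed cap and pct of global annual turnover."""
    return max(cap_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical €2B global annual turnover
print(f"Prohibited practices: up to €{fine_ceiling(turnover, 35_000_000, 0.07):,.0f}")   # €140,000,000
print(f"Misleading information: up to €{fine_ceiling(turnover, 7_500_000, 0.01):,.0f}")  # €20,000,000
print(f"Other obligations: up to €{fine_ceiling(turnover, 15_000_000, 0.03):,.0f}")      # €60,000,000
```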

FAQ about EU AI Act

What is the EU AI Act?

The EU AI Act is the European Union's flagship law regulating how AI systems are designed and used within the EU. The Act takes a risk-based approach, classifying AI applications along a spectrum from minimal risk to outright prohibition. In its approach, the Act is similar to other EU product safety laws, and it shares many aspects with GDPR, including serious penalties of up to 7% of global turnover for violations.

Who will be affected by the EU AI Act?

The EU AI Act mandates that AI system providers based in the EU comply with the regulation. Providers and deployers situated outside the EU are also obligated to comply if the output of their AI systems is used within the EU. AI systems used exclusively for military purposes are exempt, however, as are public authorities in third countries that use AI under international cooperation agreements.

How are companies outside of the EU impacted by the EU AI Act?

The situation is similar to the extraterritorial reach of the General Data Protection Regulation (GDPR). Even if a non-European company does not plan to serve the EU market, it may still need to comply to mitigate the legal risk associated with offering AI-based products and services: it is sufficient for the output of an AI system to be used in the EU for the AI Act to apply.

What is the timeline for the EU AI Act?

On 1 August 2024, the EU AI Act officially entered into force. The Act will become fully applicable 24 months after this date, except for specific provisions. One notable provision is that by 2 February 2025, all providers and deployers of AI systems must ensure, to the best of their ability, a sufficient level of AI literacy among staff involved in the operation and use of these systems.

How can companies be ready for the EU AI Act?

To be ready for the EU AI Act, companies should conduct a comprehensive audit of their existing AI systems to assess compliance and consult legal experts to understand the implications for their specific operations. Ongoing monitoring and staff training are also essential to ensure that both current and future AI technologies meet the regulatory requirements.

When do you become a provider if you modify an AI system?

According to the EU AI Act, significant modifications to an AI system can change your role from a deployer, importer, or distributor to a provider. Significant modifications include:

  • Altering Core Algorithms: Changes to the fundamental logic or algorithms of the AI system.
  • Re-training with New Data: Using new datasets for training that substantially alter the system’s performance or behavior.
  • Integration with Other Systems: Modifying how the AI system interacts with other hardware or software components.

Implications of becoming a provider include increased responsibilities such as complying with all provider obligations, maintaining detailed technical documentation, ensuring compliance with the Act’s requirements, and providing transparency information. You may also be subject to additional scrutiny and regulatory requirements.

Ensure Your AI Compliance

Whether you are already using AI in your business or considering it, keeping these regulatory requirements in mind is vital to avoid delays and penalties. Use Modulos to ensure that your AI models are trained transparently on the high-quality data the Act requires.