The EU Artificial Intelligence Act is setting a global standard for AI regulation, as GDPR did for data privacy. Here, we provide an overview of the Act and guide you on how to prepare your AI systems for compliance.
According to the EU AI Act, an ‘Artificial Intelligence system’ is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from its inputs how to generate outputs such as predictions, recommendations, content, or decisions that can influence physical or virtual environments.
This broad definition, inspired by OECD guidelines, is designed to be future-proof: it covers a wide range of technologies, from generative AI and deep learning to more conventional data analysis techniques.
Characteristic of AI Systems | Example: Spam Filter | Example: Virtual Assistant | Example: Credit Scoring System
---|---|---|---
**Varying levels of autonomy.** AI systems can operate with some independence from human involvement. | Operates without human involvement and refines itself through feedback. | Performs tasks based on voice commands without human intervention. | Evaluates creditworthiness autonomously using data inputs.
**Explicit or implicit objectives.** Objectives can be set by humans or be implicit in the tasks and data. | Identifies spam emails and similar unwanted messages. | Assists with tasks such as setting reminders, providing information, or controlling smart home devices. | Assesses financial risk for lending decisions.
**Generating outputs.** AI systems infer how to generate outputs that influence physical or virtual environments. | Changes the contents of your inbox and spam folder. | Enhances user productivity and convenience in managing daily activities. | Influences loan approval processes and interest rates offered to applicants.
**Self-learning capabilities.** AI systems can change while in use, adapting to new data or tasks. | Learns from examples of spam emails to distinguish them. | Improves accuracy and relevance of responses through user interaction. | Continuously updates scoring models based on new financial data.
The EU AI Act introduces a risk-based classification for AI applications, categorizing them from minimal risk to banned applications based on their impact on individuals and society.
AI applications such as social scoring systems and manipulative technologies are banned due to their potential to cause harm.
High-risk AI applications, like those evaluating creditworthiness or critical infrastructure, require rigorous assessment before market entry. Businesses must determine whether their existing or planned AI applications fall under this category and prepare for strict compliance reviews.
Limited-risk applications, such as image and video processing tools, recommender systems, or chatbots, still carry obligations, such as disclosing that a user is interacting with an AI system. Data quality, transparency, and fairness standards remain essential even at this risk level.
AI applications deemed to pose minimal risk, such as spam filters and video games, are not subject to further regulatory requirements.
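For teams taking a first inventory of their AI portfolio, these four tiers can be sketched as a simple triage structure. The snippet below is purely illustrative: the `RiskTier` enum and the example mapping are our own assumptions, and an actual classification requires legal analysis of the Act's annexes rather than a lookup table.

```python
from enum import Enum

# Illustrative only: the four risk tiers described above, paired with
# this article's own examples. Real classification requires legal
# analysis of the Act's annexes, not a lookup table.
class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "conformity assessment required before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional requirements"

EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "credit scoring system": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```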
The Act lays out a range of requirements for high-risk AI systems, covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity. Limited-risk systems are evaluated against the same categories but face a lower level of scrutiny.
Aligning with industry standards like ISO/IEC 42001:2023 can support organizations in meeting these compliance requirements. ISO/IEC 42001 provides a structured approach to managing AI risks, ensuring data quality, and maintaining robust documentation.
High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This includes generating and maintaining extensive documentation and evidence.
- Establish, implement, document, and maintain a risk management system to address the risks posed by a high-risk AI system.
- Implement effective data governance, including bias mitigation, training, validation, and testing of data sets.
- Maintain up-to-date technical documentation in a clear and comprehensive manner.
- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime (see the logging sketch after this list).
- Design systems to ensure sufficient transparency for deployers to interpret outputs and use appropriately.
- Develop systems to maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.
- Ensure proper human oversight during the period the system is in use.
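To make the record-keeping item concrete, here is a minimal sketch of automatic event logging around an inference call. It is illustrative only: the Act mandates logging but prescribes no particular format, so the field names and the `log_inference_event` helper are assumptions of ours, not a standard.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of an append-only audit log for an AI system.
# Field names are illustrative assumptions; the Act does not
# prescribe a log format.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_inference_event(model_version: str, input_summary: dict, output: dict) -> None:
    """Record one inference event as a structured JSON log line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Summarize inputs (e.g. hashes, counts); avoid logging raw personal data.
        "input_summary": input_summary,
        "output": output,
    }
    logger.info(json.dumps(record))

# Example: record a single credit-scoring decision.
log_inference_event(
    model_version="credit-scorer-1.4.2",
    input_summary={"applicant_id_hash": "9f3a...", "n_features": 42},
    output={"score": 0.73, "decision": "refer_to_human_review"},
)
```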
The EU AI Act outlines specific roles and responsibilities for stakeholders in the AI system lifecycle. Each role comes with distinct obligations and impacts under the regulation.
Here's a brief overview:
**Providers**
Role: Develop and market AI systems.
Responsibilities: Maintain technical documentation, ensure compliance with the Act, and provide transparency information.
**Deployers**
Role: Use AI systems within their operations.
Responsibilities: Conduct impact assessments, notify authorities, and involve stakeholders in the assessment process.
**Importers**
Role: Place AI systems from third countries on the EU market.
Responsibilities: Verify compliance, provide necessary documentation, and cooperate with authorities.
**Distributors**
Role: Make AI systems available on the market.
Responsibilities: Verify CE marking and conformity, take corrective actions if needed, and cooperate with authorities.
In April 2021, the EU Commission released the full proposed EU AI Act, initiating the legislative process. On 12 July 2024, the European Union Artificial Intelligence Act was published in the Official Journal of the European Union, marking the final step in the AI Act's legislative journey.
The Act officially entered into force on 1 August 2024. By 2 February 2025, all providers and deployers of AI systems must ensure, to the best of their ability, a sufficient level of AI literacy among staff dealing with the operation and use of AI systems. The Act becomes fully applicable on 2 August 2026, except for specific provisions, which follow the timeline below.
- 1 August 2024: The Act officially enters into force.
- 2 February 2025: Prohibitions on unacceptable-risk AI practices enter into force.
- 2 August 2025: Obligations for providers of general-purpose AI models go into effect.
- 2 February 2026: Deadline for the Commission's implementing act on post-market monitoring.
- 2 August 2026: Obligations for high-risk AI systems listed in Annex III, including biometrics, critical infrastructure, and law enforcement, go into effect.
- 2 August 2027: Obligations apply to high-risk AI systems that are safety components of products requiring third-party conformity assessments.
- 31 December 2030: Deadline for compliance for AI systems that are components of large-scale IT systems under EU law in the area of Freedom, Security, and Justice.
The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company’s global annual turnover or a predetermined amount, whichever is higher. Provisions include more proportionate caps on administrative fines for SMEs and start-ups.
Ensure your AI systems comply with the EU AI Act to avoid these penalties.
The fine tiers are:

- Up to €35M or 7% of global annual turnover for violations of the prohibited AI practices.
- Up to €15M or 3% of global annual turnover for non-compliance with the Act's other obligations.
- Up to €7.5M or 1% of global annual turnover for supplying incorrect, incomplete, or misleading information to authorities.
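As a worked example of the "whichever is higher" rule, the sketch below computes the upper bound of a fine from a company's global annual turnover. The tier values mirror the figures above; the `max_fine` function and the dictionary layout are illustrative assumptions, not anything defined by the Act.

```python
# Illustrative only: the upper bound of an administrative fine under the
# "fixed cap or percentage of global annual turnover, whichever is higher"
# rule. Tier values mirror the figures in this article; SMEs and start-ups
# are subject to more proportionate caps not modeled here.
FINE_TIERS = {
    "prohibited_practices":  {"cap_eur": 35_000_000, "pct_turnover": 0.07},
    "other_violations":      {"cap_eur": 15_000_000, "pct_turnover": 0.03},
    "incorrect_information": {"cap_eur": 7_500_000,  "pct_turnover": 0.01},
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier."""
    t = FINE_TIERS[tier]
    return max(t["cap_eur"], t["pct_turnover"] * global_annual_turnover_eur)

# A company with EUR 1 billion turnover violating the prohibitions faces
# up to max(35M, 7% of 1B) = EUR 70M.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```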
The EU AI Act is the European Union's flagship law regulating how AI systems are designed and used within the EU. It takes a risk-based approach, classifying AI applications on a spectrum from minimal risk to banned entirely. In this respect the Act resembles other EU product safety laws, and it shares many aspects with the GDPR, including serious penalties for violations of up to 7% of global turnover.
The EU AI Act mandates that AI system providers based in the EU comply with the regulation. Providers and users situated outside the EU are also obligated to comply if the outputs of their AI systems are used within the EU. However, AI systems used exclusively for military purposes, as well as public authorities in non-EU countries using AI under international cooperation agreements, are exempt from this regulation.
The situation mirrors the global reach of the General Data Protection Regulation (GDPR). Even if a non-European company does not plan to serve the EU market, it may still need to comply to mitigate the legal risk of offering AI-based products and services: it is sufficient for the output of an AI system to be used in the EU for the AI Act to apply.
On 1 August 2024, the EU AI Act officially entered into force. The Act will become fully applicable 24 months after this date, except for specific provisions. One notable provision is that by 2 February 2025, all providers and deployers of AI systems must ensure, to the best of their ability, a sufficient level of AI literacy among staff involved in the operation and use of these systems.
To be ready, companies will have to adhere to the extensive requirements stipulated in the EU AI Act. They should conduct a comprehensive audit of their existing AI systems to assess compliance and consult legal experts to understand the implications for their specific operations. Ongoing monitoring and staff training are also essential to ensure that both current and future AI technologies meet the regulatory requirements.
According to the EU AI Act, significant modifications to an AI system can change your role from a deployer, importer, or distributor to a provider. Examples include putting your name or trademark on a high-risk AI system already on the market, making a substantial modification to such a system, or changing the intended purpose of an AI system so that it becomes high-risk.
Implications of becoming a provider include increased responsibilities such as complying with all provider obligations, maintaining detailed technical documentation, ensuring compliance with the Act’s requirements, and providing transparency information. You may also be subject to additional scrutiny and regulatory requirements.
Whether you are already using or considering AI in your business, keeping these upcoming regulatory requirements in mind is vital to avoid delays and penalties. Use Modulos to ensure that your AI models are trained transparently and on the high-quality data the Act requires.