Understanding the EU AI Act for Businesses: A Comprehensive Guide

On 14 June 2023, the European Parliament adopted its negotiating position on the EU AI Act, marking a significant milestone in the effort to establish global leadership in AI regulation and to foster a trusted ecosystem that manages AI risks while upholding human rights. As the first comprehensive legislation of its kind, the Act will have wide-ranging implications for numerous AI systems used within the EU.

This blog post provides a comprehensive overview of the EU AI Act, from its objectives and the EU’s regulatory approach towards AI to the obligations for high-risk systems. We then discuss the consequences of non-compliance and explain how businesses can prepare for and align themselves with these regulations, highlighting how the Modulos Responsible AI Platform can support your compliance with the EU AI Act.

1. Exploring the EU AI Act: A Global Gold Standard for AI Regulation

The EU AI Act, aiming to establish a global AI regulatory standard, has wide-reaching implications for entities operating in or interacting with the EU market, regardless of their location. It adopts a sector-agnostic approach to ensure consistent and proportionate standards, preventing the deployment of potentially harmful systems while facilitating innovation. Central to the regulation is the protection of EU citizens’ fundamental rights and the prevention of avoidable harm. The Act defines artificial intelligence as a “machine-based system operating with varying levels of autonomy, generating outputs that influence physical or virtual environments”.

1.1 Who is impacted by the EU AI Act?

Providers of AI systems, regardless of their location, are subject to the Act whenever their systems are placed on the market or put into use in the EU. Deployers and distributors within the EU, as well as importers of AI systems, product manufacturers, and authorized representatives, also fall under its scope. This broad reach underscores the Act’s significance for all parties involved in the design, development, deployment, and use of AI systems within the EU.

1.2 Who is not impacted by the EU AI Act? 

To strike a balance between innovation and safety, the Act exempts AI systems used solely for research, testing, and development, as long as they are not tested in real-world conditions and they respect fundamental rights and applicable legal obligations. Additionally, public authorities of third countries, international organizations, AI systems used exclusively for military purposes, and AI components provided under free and open-source licenses (unless they are foundation models) are excluded from the legislation.

2. Unveiling AI Regulation in the EU: A Framework Based on Four Risk Categories

The EU AI Act places EU citizens at its center, implementing safeguards that minimize avoidable harm while avoiding obligations that would stifle innovation. The Act categorizes AI systems into four risk categories:

  • Minimal risk: This category includes systems like spam filters or AI-based video games and does not impose any obligations.
  • Limited risk: Systems with some level of risk, such as deep fakes, have transparency obligations to inform users when interacting with AI-generated or manipulated content.
  • High risk: AI systems that can potentially harm the health, safety, or fundamental rights of individuals fall into this category, facing the most stringent obligations.
  • Unacceptable risk: Systems posing an unacceptable level of risk, such as real-time biometric identification and those employing subliminal techniques, are prohibited from being used or made available in the EU.
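
To make these categories concrete in day-to-day compliance work, the sketch below shows one way a team might tag systems in an internal inventory. This is a minimal, purely illustrative Python example; the tier names and sample mappings are our own assumptions, not terminology mandated by the Act.

```python
# Illustrative only: tagging systems with the Act's four risk tiers
# during an internal AI inventory review (not an official taxonomy tool).
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters, AI in video games
    LIMITED = "limited"            # e.g. deep fakes; transparency duties
    HIGH = "high"                  # e.g. Annex III use cases; strictest duties
    UNACCEPTABLE = "unacceptable"  # e.g. subliminal techniques; prohibited

# First-pass internal tags, pending a proper legal assessment
inventory_tags = {
    "email-spam-filter": RiskTier.MINIMAL,
    "cv-screening-model": RiskTier.HIGH,
}
```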

2.1 Determining High-Risk Classification for Your AI System

Article 6 of the EU AI Act designates an AI system as high-risk based on particular criteria: for instance, when the system is a safety component of a product (or is itself a product) covered by the EU harmonization legislation cited in Annex II and is required to undergo an independent assessment of health and safety risks.

This encompasses products subject to safety regulations, such as toys, lifts, pressure equipment, and diagnostic medical devices, as detailed in Annex II. Furthermore, Annex III identifies eight use cases considered high-risk if they could substantially endanger the health, safety, or fundamental rights of individuals.
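
As a rough illustration of this two-route logic, the Python sketch below flags systems for a detailed high-risk review. The area labels are our own shorthand for the eight Annex III use cases described further down; actual classification always requires a legal assessment.

```python
# Illustrative only: a simplified first-pass screen mirroring the two
# Article 6 routes described above. The area labels are our own shorthand
# for the eight Annex III use cases; real classification needs legal review.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border_control",
    "justice_democracy",
}

def flag_for_high_risk_assessment(is_safety_component: bool,
                                  covered_by_annex_ii: bool,
                                  annex_iii_area: str | None) -> bool:
    """Return True if the system should undergo a detailed high-risk review."""
    # Route 1: safety component of a product under Annex II harmonization law
    if is_safety_component and covered_by_annex_ii:
        return True
    # Route 2: falls into one of the eight Annex III use-case areas
    return annex_iii_area in ANNEX_III_AREAS

print(flag_for_high_risk_assessment(False, False, "employment"))  # True
```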

Even though the definition of high-risk AI systems remains somewhat unclear, a report by KI Bundesverband e.V. (2022) suggests that between 33% and 50% of AI systems could fall under the high-risk category. This estimate significantly exceeds the EU Commission’s initial assumption in its Policy Impact Assessment, which predicted that only 5-15% of AI systems would be high-risk. The potentially large proportion of high-risk AI systems underscores the importance of promptly understanding and adhering to the requirements of the EU AI Act.

To assist in the evaluation of these risks, the European Commission, in collaboration with the AI Office and relevant stakeholders, will issue guidelines six months before the enforcement of the Act. These guidelines will provide clarity on scenarios where these systems’ outcomes could significantly impact the health, safety, or fundamental rights of individuals.

2.1.1 The use cases that fall under the high-risk category include: 

  • Biometric and biometric-based systems: encompass technologies used for identifying individuals based on their biometric data and inferring personal characteristics, such as emotion recognition, while excluding systems solely used for confirming a specific person’s identity.
  • Systems for critical infrastructure: include safety components in the management and operation of transportation (except those already regulated elsewhere) and of essential supplies such as water, gas, heating, and electricity, as well as critical digital infrastructure, including systems whose failure could cause significant environmental harm.
  • Education and vocational training systems: encompass systems that determine or influence access, admission, or assignment to educational institutions, systems that assess students for admission or determine appropriate education levels, and systems that monitor and detect prohibited student behavior.
  • Systems influencing employment, worker management and access to self-employment: include high-risk systems used for recruitment, selection, and decision-making processes, such as targeted job ad placement, performance evaluation in interviews or tests, application screening and candidate evaluation, as well as systems used for promotion, termination, task allocation, monitoring, and evaluating performance and behavior.
  • Systems affecting access and use of private and public services and benefits: include AI systems employed by public authorities for assessing eligibility, granting, revoking, increasing, or reclaiming benefits and services in areas such as healthcare, housing, utilities (electricity, heating/cooling), and internet, along with credit scoring systems (excluding fraud detection) and emergency-related systems for evaluating and prioritizing dispatch of first responders, including police, firefighters, and medical aid.
  • Systems used in law enforcement: involve AI tools used by law enforcement agencies or EU entities for evidence evaluation, individual profiling, crime analytics, and polygraph-like applications in criminal investigation and prosecution.
  • Systems used in migration, asylum and border control management: encompass AI tools used by public authorities or EU agencies for risk assessment, document verification, application processing, border monitoring, and trend forecasting in relation to migration and border crossing.
  • Systems used in the administration of justice and democratic processes: comprise AI tools utilized by judicial authorities for researching, interpreting, and applying facts and law, as well as systems intended to influence voting behavior and election outcomes (excluding systems whose output individuals are not directly exposed to), including recent additions covering Very Large Online Platforms under the Digital Services Act, and systems used for organizing political campaigns.

To delve deeper into the nuances of these use cases, including their unique exceptions, we recommend exploring pages 112-116 of the Europarl Library Document.

3. Obligations for High-Risk AI Systems: Understanding the Requirements

Compliance requirements for high-risk systems depend on the entity’s role, entailing seven overarching obligations plus an additional requirement that applies exclusively to providers of foundation models.

Confirmation of compliance with these obligations requires a conformity assessment. Once successfully assessed, systems must display the CE marking, either digitally or physically depending on their nature, before being made available on the market. Additionally, these systems must be registered in a public database.

In case of substantial modifications to the system, such as retraining the model on new data or removing certain features, this process must be repeated.
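
The snippet below sketches this rule in code: certain changes to a deployed system re-trigger the conformity assessment. The specific trigger list is an assumption for illustration only; the Act itself governs what counts as a substantial modification.

```python
# A rough sketch of the "substantial modification" rule above: certain
# changes re-trigger the conformity assessment. The trigger list is an
# assumption for illustration; the Act itself defines what qualifies.

SUBSTANTIAL_CHANGES = {
    "retrained_on_new_data",
    "feature_removed",
    "intended_purpose_changed",
}

def needs_reassessment(changes: set[str]) -> bool:
    """True if any change is substantial enough to repeat the assessment."""
    return bool(changes & SUBSTANTIAL_CHANGES)

print(needs_reassessment({"retrained_on_new_data"}))  # True
print(needs_reassessment({"ui_theme_update"}))        # False
```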

4. Penalties for Non-Compliance: Consequences of Violations under the EU AI Act

Prioritizing compliance with the Act’s obligations is crucial for organizations to mitigate substantial financial and reputational consequences.

Non-compliance carries the risk of significant penalties, which can amount to €30 million or 6% of global turnover (whichever is higher). The severity of fines varies depending on the offense, ranging from prohibited system use at the more severe end to providing incorrect or misleading information at the less severe end, potentially resulting in fines of up to €10 million or 2% of turnover.
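
As a worked example of these ceilings, the applicable cap is whichever is higher: the fixed amount or the percentage of global turnover. The sketch below applies the figures cited above to a hypothetical company with €1 billion in turnover.

```python
# Worked example of the fine ceilings cited above: the applicable cap is
# whichever is higher, the fixed amount or the share of global turnover.

def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, pct * turnover_eur)

# Most severe tier (e.g. use of a prohibited system): EUR 30M or 6%
print(fine_cap(1_000_000_000, 30_000_000, 0.06))  # 60000000.0 -> EUR 60M

# Less severe tier (incorrect or misleading information): EUR 10M or 2%
print(fine_cap(1_000_000_000, 10_000_000, 0.02))  # 20000000.0 -> EUR 20M
```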

5. Preparing for the EU AI Act: A Comprehensive Approach for Companies

Businesses engaged in the development or deployment of AI systems within the European Union must dedicate significant efforts to ensure compliance with the EU AI Act. The Act’s comprehensive provisions can be complex to navigate, highlighting the importance for companies to leverage the preparatory period effectively. 

With a timeline of approximately two and a half years remaining until the Act is enforced, businesses must strategically employ this period to bolster their preparedness. This involves the creation of robust governance structures, the development of internal competencies, and the deployment of requisite technologies.

To effectively prepare for compliance with the EU AI Act, companies should:

  1. Compile a comprehensive inventory of AI systems that they develop and/or deploy, both within the EU and globally, inclusive of each system’s intended purpose and capabilities (see the illustrative record sketch after this list).
  2. Establish clear governance procedures that define the rules for AI use and ensure compliance with the AI Act. Transparent guidelines should be implemented to align AI applications with the Act’s provisions.
  3. Foster competence within the organization by cultivating an environment of understanding, application, and adherence to the rules of the AI Act. This process entails developing and nurturing the necessary expertise for interpreting and complying with the legislative framework.
  4. Implement the required technology infrastructure to efficiently address the demands outlined in the AI Act, ensuring that the necessary tools and systems are in place.
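
As a starting point for step 1, the sketch below outlines one possible shape for an inventory record. The field names are our own suggestion, not a format prescribed by the Act.

```python
# An illustrative, non-prescriptive record shape for the step 1 inventory.
# Field names are our own assumption, not a format required by the Act.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    capabilities: list[str]
    deployed_in_eu: bool
    role: str                       # e.g. "provider", "deployer", "importer"
    risk_tier: str = "unassessed"   # set after the legal risk classification

inventory = [
    AISystemRecord(
        name="cv-screening-model",
        intended_purpose="Rank incoming job applications for recruiters",
        capabilities=["text classification", "ranking"],
        deployed_in_eu=True,
        role="deployer",
    ),
]
```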

By following these steps, companies can proactively prepare themselves to meet the requirements of the EU AI Act and ensure compliance in the coming years.

6. Alignment with AI Regulations: Facilitating Your Company’s Compliance through Modulos

At Modulos, we offer advanced AI solutions tailored to meet these new regulatory standards. The Modulos Responsible AI Platform empowers you to improve your data quality and take decisive action against discriminatory biases. Modulos can provide support in various ways:

Systematic Fairness Framework

  • Structured guidance for effectively addressing bias and data quality issues, enabling users to achieve use case-specific fairness and performance objectives in alignment with regulatory requirements and internal policies.
  • Expertly designed user stories with a comprehensive collection of scopes and tasks facilitating the assessment of AI application trustworthiness and the adoption of suitable mitigation strategies.
  • Flexibility in customizing user stories, allowing users to set controls and constraints that drive the development of trustworthy AI solutions.

Algorithmic Tools

  • A comprehensive collection of algorithmic tools to streamline task execution within each project scope.
  • Data-Centric AI methodologies to evaluate the impact of individual samples on fairness.
  • Default code snippets within the platform to expedite and streamline task execution related to the assessment and mitigation strategies for an AI system.

Storage and Reporting

  • Artifact storage to facilitate the compilation of a Responsible AI report, supporting regulatory compliance.
  • API-based architecture, enabling seamless integration and exchange with third-party applications to ensure upstream auditability and the establishment of governance mechanisms throughout the AI lifecycle.
  • Documentation and reproducibility of project workflows for transparency and accountability.

Our team is well-equipped to guide you through every step of the process, making the path to regulatory readiness smooth and straightforward. We aim to elevate your business operations while fostering ethical and responsible AI usage.

Schedule a demo to find out more about how Modulos can support your alignment with the EU AI Act.