AI Regulation is here

The EU Artificial Intelligence Act is setting the global standard for AI regulation, much as GDPR did for data privacy. Below is a brief overview of the Act and how you can get ready.

Risk-based Classification

The EU AI Act introduces a risk-based classification scheme for AI applications. The main criterion is the level of risk posed by the AI application to individuals or society as a whole. The classification ranges from minimal risk to applications which are banned entirely.

  • Unacceptable Risk

    Some AI applications, such as social scoring systems or manipulative systems that can lead to harm, are banned outright.
  • High Risk

    High-risk applications include services that directly affect citizens’ lives (e.g., evaluating creditworthiness or educational opportunities, or applications in critical infrastructure). They must pass strict assessment regimes before they can be placed on the market. Businesses need to consider whether their existing or planned AI applications might be classified as “high risk”. The EU will update and expand this list on a regular basis.
  • Limited Risk

    Other AI applications still carry obligations with them, such as disclosing that a user interacted with an AI system. Best practices related to data quality and fairness are essential even in this risk regime. Some examples are image and video processing, recommender systems, and chatbots.
  • Minimal Risk

    Applications such as spam filters or video games are deemed to carry minimal risk and are therefore not subject to further regulatory requirements.
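The four tiers above can be thought of as a lookup from use case to obligation. The sketch below is purely illustrative and not legally authoritative: the example use cases and their mapping are assumptions for illustration, and the actual classification of any system depends on the Act’s annexes and a legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers and their headline consequence."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity assessment before market entry"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "no additional regulatory requirements"

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

In a compliance workflow, such a mapping would be revisited regularly, since the EU intends to update the list of high-risk applications over time.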

How is AI defined?

The EU wants its definition of “artificial intelligence” to be future-proof, which means it has to cover an incredibly wide range of data analysis techniques. The EU will therefore consider not just deep learning and complex applications such as self-driving cars to be AI. The proposed definition is so broad that many of the technologies your business uses today will fall under it and be regulated:

‘Artificial Intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.*

 

*OECD definition of AI

Conformity Assessment

Compliance Requirements

The Act lays out a range of requirements for high-risk AI systems across the design, implementation, and post-market phases. These include:

  • Risk Management System (Article 9)
  • Data and Data Governance (Article 10)
  • Technical Documentation (Article 11 and Annex IV)
  • Record Keeping (Article 12)
  • Transparency and Provision of Information to Users (Article 13)
  • Human Oversight (Article 14)
  • Accuracy, Robustness and Cybersecurity (Article 15)
  • Quality Management System (Article 17)
  • Fundamental Rights Impact Assessment

While limited-risk systems will not face the same compliance scrutiny, including conformity assessments and product safety reviews, they will still be evaluated along these dimensions.

Conformity Assessments in the EU AI Act

High-risk AI systems will have to undergo a conformity assessment (Article 19) to demonstrate adherence to the AI Act before being placed on the EU market. You are required to generate and collect the documentation and evidence for such an assessment.

Timeline

The AI Act has been passed by the EU Parliament and has reached political agreement during the trilogue between Parliament, Commission, and Council. After a final vote in Parliament and Council, the Act will enter into force.

  • April 2021

    EU Commission releases full proposed EU AI Act.

  • August 2021

    Public consultation period ended.

  • December 2021

    Negotiations in the EU Parliament started.

  • February 2022

    French EU Presidency published compromise draft.

  • June 2022

    Deadline for MEPs to submit amendments.

  • April 2023

    EU lawmakers reached a political agreement.

  • May 2023

    Minor technical adjustments to the EU AI Act are possible.

  • June 2023

    EU Parliament Plenary Vote.

  • July 2023

    The first operational trilogue.

  • December 2023

    Political Agreement on the AI Act

  • Early 2024

    Final vote in Parliament and Council

  • 2025?

    Penalties for non-compliance begin.

Penalties of the EU AI Act

The fines for violations of the AI Act are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations involving banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.

  • Non-compliance with prohibitions: up to €35 million or 7% of turnover
  • Non-compliance with other obligations: up to €15 million or 3% of turnover
  • Supplying incorrect, incomplete, or misleading information: up to €7.5 million or 1.5% of turnover
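The fine structure is a simple “whichever is higher” rule: a fixed cap compared against a share of global annual turnover. A minimal sketch, assuming the tier names and values from the provisional agreement described above (the function and tier labels are illustrative, not part of the Act):

```python
# Fine tiers from the provisional agreement:
# tier -> (fixed cap in EUR, share of previous year's global annual turnover)
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine: the higher of the fixed cap
    and the turnover-based amount for the given violation tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A company with EUR 1 billion turnover violating a prohibition:
# 7% of 1 billion = 70 million, which exceeds the 35 million fixed cap.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller companies the turnover-based amount falls below the fixed cap, which is why the agreement foresees more proportionate caps for SMEs and start-ups.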

FAQ about EU AI Act

What is the EU AI Act?
The EU AI Act is the European Union's flagship law to regulate how AI systems should be designed and used within the EU. The Act proposes a risk-based approach, classifying AI applications on a spectrum from no risk to banned entirely. In its approach, the Act is similar to other product safety laws in the EU and it shares many aspects with GDPR, including serious penalties for violations of up to 7% of global turnover.
Who will be affected by the EU AI Act?
The EU AI Act mandates that AI system providers based in the EU comply with the regulation. Moreover, both providers and users situated outside the EU are also obligated to abide by these guidelines if the outcomes of their AI systems are used within the EU. Nonetheless, organizations using AI for military purposes and public agencies in countries outside the EU are exempt from this regulation.
How are companies based in Switzerland impacted by the EU AI Act?
The situation is similar to the global reach of the General Data Protection Regulation (GDPR), and the same will be true of the AI Act. Even if a Swiss company does not plan to serve the EU market, it may still need to comply to mitigate the legal risk associated with offering AI-based products and services.
What is the timeline for the EU AI Act?
The European Commission aims to adopt the AI Act within the year, targeting a deadline of February 2024. This schedule is influenced by the forthcoming European Parliament elections and the formation of a new European Commission slated for May 2024.
How can companies be ready for the EU AI Act?
To be ready for the EU AI Act, companies will have to adhere to the extensive requirements stipulated in the EU AI Act. They should conduct a comprehensive audit of their existing AI systems to assess compliance and consult legal experts to understand the implications for their specific operations. Ongoing monitoring and staff training are also essential to ensure that both current and future AI technologies meet the regulatory requirements.

Global AI Regulation


European Union

The EU AI Act aims to set global AI regulations much like GDPR did. With extraterritorial reach and a risk-based approach, it targets protecting citizen and consumer rights. Penalties could reach 7% of global turnover.

On December 8, 2023, the European Parliament and the Council of the European Union reached a political agreement on the EU AI Act.

  • You can read more about the European Parliament’s position on the AI Act in its official position document.

Act Now

Whether you are already using AI in your business or considering it, keeping these upcoming regulatory requirements in mind is vital to avoid delays and penalties. Use Modulos to ensure that your AI models are trained transparently and on the high-quality data the Act requires.