The EU AI Act

The EU Artificial Intelligence Act is the most important AI law you’ve never heard of. We give you a brief overview of what’s likely going to be in it, why you should care about it, and what you need to do to get ready.

Managing Risks

The goal of the EU AI Act is to protect citizens and consumers from potentially harmful consequences of AI technology. The draft of the Act proposes strict requirements on how AI-based products and services must be designed, deployed and disclosed. The main criterion is the level of risk an application poses to individuals or to society as a whole.

  • Unacceptable Risk

    Some AI applications, such as social scoring systems or manipulative systems that can lead to harm, are banned outright.

  • High Risk

    High-risk applications include services directly affecting citizens’ lives (e.g., evaluating creditworthiness or educational opportunities, applications applied to critical infrastructure). They will have to undergo strict assessment regimes before they can be placed on the market. Businesses need to consider whether their existing or planned AI application might be considered “high risk”. The EU will update and expand this list on a regular basis.

  • Limited Risk

    Other AI applications still carry obligations with them, such as disclosing that a user interacted with an AI system. Best practices related to data quality and fairness are essential even in this risk regime. Some examples are image and video processing, recommender systems, and chatbots.

  • Minimal Risk

    Applications such as spam filtering or video games are deemed to carry a minimal risk and as such they are not subject to further regulatory requirements.
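The four tiers above can be sketched as a simple lookup. This is a minimal illustration, not a legal classification: the names `RiskTier` and `EXAMPLE_TIERS` are ours, and the tier assignments merely mirror the examples given in this overview; the authoritative high-risk list is maintained by the EU itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict conformity assessment before market entry"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "no further regulatory requirements"

# Illustrative mapping based on the examples in this overview only.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "critical infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "recommender system": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game": RiskTier.MINIMAL,
}

print(EXAMPLE_TIERS["chatbot"].value)
```

A real compliance review would, of course, assess each application individually against the Act's criteria rather than a static table.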

How is AI defined?

The EU wants its definition of “artificial intelligence” to be future-proof, which means it has to cover an incredibly wide range of data analysis techniques. The EU will therefore consider not just deep learning and complex applications such as self-driving cars to be AI. The proposed definition is so broad that many of the technologies used by your business today will fall under it and be regulated:

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems
  • Statistical approaches, Bayesian estimation, search and optimization methods

Compliance Requirements

The draft of the Act lays out in general terms what will be expected of AI systems to mitigate the risks posed to fundamental rights and safety:

  • High quality data
  • Documentation and traceability
  • Transparency
  • Human oversight
  • Accuracy
  • Robustness

While limited-risk systems will not face the same compliance scrutiny, including conformity assessments and product safety reviews, they will still be evaluated under these categories.

Penalties and Global Reach

The EU AI Act envisions severe penalties for non-compliance. Companies violating the EU AI Act can be fined up to 6% of their global turnover or EUR 30M, whichever is higher. GDPR set the global standard for online privacy, and companies around the world must now comply with the EU’s standards if they want to do business in and with Europe.
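The fine cap is a simple maximum of two figures. A minimal sketch (the function name `max_fine_eur` is ours, for illustration):

```python
def max_fine_eur(global_turnover_eur: int) -> int:
    """Upper bound of a fine under the draft EU AI Act:
    6% of global annual turnover or EUR 30 million, whichever is higher."""
    return max(global_turnover_eur * 6 // 100, 30_000_000)

# For a company with EUR 1 billion global turnover, 6% is EUR 60M,
# which exceeds the EUR 30M floor, so the cap is EUR 60M.
print(max_fine_eur(1_000_000_000))  # 60000000
```

For smaller companies the EUR 30M floor dominates: 6% of EUR 100M turnover is only EUR 6M, so the cap is still EUR 30M.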

The EU AI Act will similarly set the global standard for artificial intelligence applications. Legislators around the world are already studying the draft EU AI Act and proposing similar legislation.


European countries will set up their own national mechanisms and institutions for monitoring compliance, coordinated by a European Artificial Intelligence Board.

The Board is chaired by the Commission and composed of the European Data Protection Supervisor and representatives of the national supervisory authorities.


The EU AI Act has the support of the European Commission. It is close to being voted on and passed into law.

  • EU Commission released the full proposed EU AI Act.

  • Public consultation period ended.

  • Negotiations in the EU Parliament started.

  • French EU Presidency published a compromise draft.

  • Deadline for MEPs to submit amendments.

  • EU lawmakers reached a political agreement.

  • Minor technical adjustments to the EU AI Act are possible.

  • Final vote by the EU Parliament.

Act Now

Whether you are already using or considering AI in your business, keeping these upcoming regulatory requirements in mind is going to be vital to avoid delays and penalties. Use Modulos to ensure that your AI models are trained transparently on the high-quality data the Act requires.