AI Regulation is coming
The EU Artificial Intelligence Act is setting the global standard for AI regulation, just as GDPR did for data privacy. Here is a brief overview of the Act and how you can get ready.
Risk-based Classification
The EU AI Act introduces a risk-based classification scheme for AI applications. The main criterion is the level of risk posed by the AI application to individuals or society as a whole. The classification ranges from minimal risk to applications which are banned entirely.
Unacceptable Risk
Some AI applications, such as social scoring systems or manipulative systems that can lead to harm, are outlawed completely.
High Risk
High-risk applications include services that directly affect citizens’ lives (e.g., evaluating creditworthiness or educational opportunities, or applications in critical infrastructure). They will have to pass strict assessment regimes before they can be placed on the market. Businesses need to consider whether their existing or planned AI applications might be considered “high risk”. The EU will update and expand this list on a regular basis.
Limited Risk
Other AI applications still carry obligations, such as disclosing that a user interacted with an AI system. Best practices for data quality and fairness remain essential in this risk regime. Examples include image and video processing, recommender systems, and chatbots.
Minimal Risk
Applications such as spam filtering or video games are deemed to carry minimal risk and are therefore not subject to further regulatory requirements.
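The four tiers above amount to a lookup from application type to obligations. The following sketch illustrates that structure in Python; the example applications and their tier assignments are drawn from the text above purely for illustration, and an actual classification requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity assessment before market entry"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "no further regulatory requirements"

# Illustrative mapping only -- not a legal determination. The high-risk
# list in particular will be updated and expanded by the EU over time.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "critical infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "recommender system": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game": RiskTier.MINIMAL,
}

def obligations(application: str) -> str:
    """Summarize the obligations for one of the example applications."""
    tier = EXAMPLE_TIERS[application]
    return f"{application}: {tier.name} risk -> {tier.value}"
```

For example, `obligations("chatbot")` yields the limited-risk transparency obligation, while `obligations("credit scoring")` points to the high-risk conformity regime.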
How is AI defined?
The EU wants its definition of “artificial intelligence” to be future-proof, which means it has to cover an incredibly wide range of data analysis techniques. The EU will therefore consider not just deep learning and complex applications such as self-driving cars to be AI. The proposed definition is so broad that many of the technologies your business uses today will fall under it and be regulated:
‘Artificial Intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.*
*OECD definition of AI
Compliance Requirements
The Act lays out a range of requirements for high-risk AI systems spanning the design, implementation, and post-market phases. These include:
- Risk Management System (Article 9)
- Data and Data Governance (Article 10)
- Technical Documentation (Article 11 and Annex IV)
- Record Keeping (Article 12)
- Transparency and Provision of Information to Users (Article 13)
- Human Oversight (Article 14)
- Accuracy, Robustness and Cybersecurity (Article 15)
- Quality Management System (Article 17)
While limited-risk systems will not face the same compliance scrutiny, including conformity assessments and product safety reviews, they will still be evaluated along these categories.
Conformity Assessments in the EU AI Act
High-risk AI systems will have to undergo a conformity assessment (Article 19) to demonstrate adherence to the AI Act before being placed on the EU market. You are required to generate and collect the documentation and evidence needed for such an assessment.
Timeline
The AI Act has passed the EU Parliament and is now in the trilogue stage between the Parliament, Commission, and Council. After a second vote in the Parliament, there will be a two-year transition period, similar to GDPR, for implementing the Act’s requirements.
-
April 2021
EU Commission releases full proposed EU AI Act.
-
August 2021
Public consultation period ended.
-
December 2021
Negotiations in the EU Parliament started.
-
February 2022
French EU Presidency published compromise draft.
-
June 2022
Deadline for MEPs to submit amendments.
-
April 2023
EU lawmakers reached a political agreement.
-
May 2023
Minor technical adjustments to the EU AI Act are possible.
-
June 2023
EU Parliament Plenary Vote.
-
July 2023
The first operational trilogue.
-
2023
Final vote before the end of the year.
-
2025?
Penalties for non-compliance begin.
Penalties of the EU AI Act
Previously three-tiered, the penalty regime under Article 71 now follows a four-tier approach after the European Parliament’s latest amendments, with some fines surpassing the hefty penalties of GDPR.
- Non-compliance with prohibitions: up to 40M€ or 7% of turnover
- Non-compliance with data and data governance and transparency requirements: up to 20M€ or 4% of turnover
- Non-compliance with other obligations: up to 10M€ or 2% of turnover
- Supplying incorrect, incomplete, or misleading information: up to 500K€ or 1% of turnover
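Each tier caps the fine as either a fixed amount or a percentage of turnover. A minimal sketch of how such a cap is computed, assuming a GDPR-style “whichever is higher” rule (verify this against the final text of the Act):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Upper bound of a fine tier: the higher of the fixed cap or the
    given percentage of worldwide annual turnover (assumed rule)."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Example: top tier (40M EUR or 7% of turnover) for a company with
# 1B EUR annual turnover -- 7% of 1B = 70M EUR exceeds the 40M cap.
top_tier = max_fine(40_000_000, 0.07, 1_000_000_000)
# top_tier == 70_000_000.0
```

For smaller firms the fixed cap dominates: at 100M€ turnover, 7% is only 7M€, so the exposure in this tier would be the 40M€ cap.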
The EU AI Act Map
Our EU AI Act Map will guide you through the complexities of the new regulation. It helps you answer key questions about the Act and determine whether your AI system falls into the high-risk category.
FAQ about EU AI Act
What is the EU AI Act?
Who will be affected by the EU AI Act?
How are companies based in Switzerland impacted by the EU AI Act?
What is the timeline for the EU AI Act?
How can companies be ready for the EU AI Act?
Global AI Regulation
European Union
The EU AI Act aims to set the standard for global AI regulation, much as GDPR did. With extraterritorial reach and a risk-based approach, it aims to protect citizen and consumer rights. Penalties could reach 7% of global turnover.
The Act is nearing finalization, expected by early 2024.
- You can read more about the European Parliament’s position on the AI Act by visiting this official document.
Act Now
Whether you are already using AI in your business or just considering it, keeping these upcoming regulatory requirements in mind will be vital to avoid delays and penalties. Use Modulos to ensure that your AI models are trained transparently on the high-quality data the Act requires.