AI Regulation is here
The EU Artificial Intelligence Act is set to become the global standard for AI regulation, much as GDPR did for data privacy. Here is a brief overview of the Act and how you can get ready.
Risk-based Classification
The EU AI Act introduces a risk-based classification scheme for AI applications. The main criterion is the level of risk posed by the AI application to individuals or society as a whole. The classification ranges from minimal risk to applications which are banned entirely.
-
Unacceptable Risk
Some AI applications such as social scoring systems or manipulative systems potentially leading to harm are outlawed completely.
-
High Risk
High-risk applications include services that directly affect citizens’ lives (e.g., evaluating creditworthiness or educational opportunities, or applications used in critical infrastructure). They must pass strict assessment regimes before they can be placed on the market. Businesses should consider whether their existing or planned AI applications might be classified as “high risk”. The EU will update and expand this list regularly.
-
Limited Risk
Other AI applications still carry obligations, such as disclosing to users that they are interacting with an AI system. Best practices related to data quality and fairness are essential even in this risk regime. Examples include image and video processing, recommender systems, and chatbots.
-
Minimal Risk
Applications such as spam filtering or video games are deemed to carry a minimal risk and as such they are not subject to further regulatory requirements.
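The four tiers above can be sketched as a simple lookup. This is purely illustrative: the tier names and example applications come from the overview above, but real classification follows the Act’s annexes and legal analysis, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers and their headline consequence."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict assessment before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no further regulatory requirements"

# Illustrative mapping of the example applications named in each tier.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "creditworthiness evaluation": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

print(EXAMPLES["chatbot"].value)  # transparency obligations
```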
How is AI defined?
‘Artificial Intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
The EU wants its definition of “artificial intelligence” to be future-proof, which means it must cover an incredibly wide range of data analysis techniques. The EU will therefore consider not only deep learning and complex applications such as self-driving cars to be AI. The definition is so broad that many of the technologies your business uses today will fall under the Act’s regulations.
Compliance Requirements
The Act lays out a range of requirements for high-risk AI systems, spanning the design, implementation, and post-market phases. These include:
- Risk Management System
- Data and Data Governance
- Technical Documentation
- Record Keeping
- Transparency and provision of information to users
- Human Oversight
- Accuracy, Robustness and Cybersecurity
- Quality Management System
- Fundamental Rights Impact Assessment
Limited-risk systems will not face the same compliance scrutiny, such as conformity assessments and product safety reviews, but they will still be evaluated under these categories.
Conformity Assessments in the EU AI Act
High-risk AI systems will have to undergo a conformity assessment (Article 19) to demonstrate adherence to the AI Act before being placed on the EU market. You are required to generate and collect the documentation and evidence for such an assessment.
Timeline
The AI Act reached a political agreement during the trilogue phase between Parliament, Commission, and Council, and was approved by the EU Parliament in March 2024.
-
April 2021
EU Commission releases full proposed EU AI Act.
-
August 2021
Public consultation period ended.
-
December 2021
Negotiations in the EU Parliament started.
-
February 2022
French EU Presidency published compromise draft.
-
April 2022
Deadline for MEPs to submit amendments.
-
June 2022
Extended deadline for MEPs to submit amendments.
-
April 2023
MEPs reached a political agreement on the Parliament’s negotiating position.
-
May 2023
Minor technical adjustments to the EU AI Act are possible.
-
June 2023
EU Parliament Plenary Vote.
-
July 2023
The first operational trilogue.
-
December 2023
Political Agreement on the AI Act.
-
March 2024
The EU AI Act has been approved.
-
2025?
Penalties for non-compliance begin.
Penalties of the EU AI Act
The fines for violations of the AI Act are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the Act’s provisions.
- Non-compliance with prohibitions: up to €35M or 7% of turnover
- Non-compliance with other obligations: up to €15M or 3% of turnover
- Supplying incorrect, incomplete, or misleading information: up to €7.5M or 1.5% of turnover
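The “whichever is higher” rule can be sketched directly, using the caps and percentages listed above (the turnover figure below is a made-up example):

```python
def applicable_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the maximum administrative fine: the fixed cap or the
    percentage of global annual turnover, whichever is higher."""
    return max(cap_eur, pct * turnover_eur)

# Penalty tiers from the Act: (fixed cap in EUR, share of global annual turnover)
TIERS = {
    "prohibitions": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

# Example: a company with €1B global annual turnover violating a prohibition.
cap, pct = TIERS["prohibitions"]
print(applicable_fine(1_000_000_000, cap, pct))  # 7% of turnover exceeds the €35M cap
```

For a smaller company, the fixed cap dominates instead: at €100M turnover, 7% is only €7M, so the €35M cap applies.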
FAQ about EU AI Act
What is the EU AI Act?
Who will be affected by the EU AI Act?
How are companies based in Switzerland impacted by the EU AI Act?
What is the timeline for the EU AI Act?
How can companies be ready for the EU AI Act?
Global AI Regulation
European Union
The EU AI Act aims to set the standard for global AI regulation, much like GDPR did for data privacy. With extraterritorial reach and a risk-based approach, it aims to protect citizen and consumer rights. Penalties can reach 7% of global turnover.
On December 8, 2023, the European Parliament and the Council of the European Union reached a political agreement on the EU AI Act.
On March 13, 2024, the European Parliament approved the EU AI Act.
- You can read more about the European Parliament’s position on the AI Act in the official Artificial Intelligence Act document.
Ensure Your AI Compliance
Whether you are already using AI in your business or considering it, keeping these regulatory requirements in mind will be vital to avoiding delays and penalties. Use Modulos to ensure your AI models are trained transparently on the high-quality data the Act requires.