The EU Artificial Intelligence Act is setting the global standard for AI regulation, much as GDPR did for data privacy. Here is a brief overview of the Act and how you can get ready.
The EU AI Act introduces a risk-based classification scheme for AI applications. The main criterion is the level of risk posed by the AI application to individuals or society as a whole. The classification ranges from minimal risk to applications which are banned entirely.
How is AI defined?
The EU wants its definition of “artificial intelligence” to be future-proof, which means it has to cover an incredibly wide range of data analysis techniques. The EU will therefore consider not just deep learning and complex applications such as self-driving cars to be AI. The proposed definition is so broad that many of the technologies your business uses today will fall under it and be regulated:
‘Artificial Intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.*
*OECD definition of AI
The Act lays out a range of requirements for high-risk AI systems covering the design, implementation and post-market phases. These include:
- Risk Management System (Article 9)
- Data and Data Governance (Article 10)
- Technical Documentation (Article 11 and Annex IV)
- Record Keeping (Article 12)
- Transparency and Provision of Information to Users (Article 13)
- Human Oversight (Article 14)
- Accuracy, Robustness and Cybersecurity (Article 15)
- Quality Management System (Article 17)
While limited-risk systems will not face the same compliance scrutiny, such as conformity assessments and product safety reviews, they will still be evaluated against these categories.
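As a purely illustrative sketch, the risk tiers and high-risk requirements described above could be tracked internally as a simple compliance data structure. Note that the `RiskTier` enum and the `requires_conformity_assessment` helper are hypothetical names, not anything defined by the Act itself; the article numbers are those cited in this article:

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The Act's risk tiers, ordered from lowest to highest."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    PROHIBITED = 3  # applications banned entirely

# Requirements the Act applies to high-risk systems (article number -> title)
HIGH_RISK_REQUIREMENTS = {
    9: "Risk Management System",
    10: "Data and Data Governance",
    11: "Technical Documentation",
    12: "Record Keeping",
    13: "Transparency and Provision of Information to Users",
    14: "Human Oversight",
    15: "Accuracy, Robustness and Cybersecurity",
    17: "Quality Management System",
}

def requires_conformity_assessment(tier: RiskTier) -> bool:
    """Only high-risk systems must pass a conformity assessment (Article 19)
    before being placed on the EU market; prohibited systems may not be
    placed on the market at all."""
    return tier == RiskTier.HIGH
```

A checklist like this is only a starting point for tracking evidence; the actual obligations depend on how your specific system is classified.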
Conformity Assessments in the EU AI Act
High-risk AI systems will have to undergo a conformity assessment (Article 19) to demonstrate adherence to the AI Act before being placed on the market in the EU. You are required to generate and collect the documentation and evidence for such an assessment.
The AI Act has passed the EU Parliament and is now in the trilogue stage between the Parliament, Commission and Council. After a second vote in the Parliament, there will be a two-year transition period, similar to GDPR, for the implementation of the Act’s requirements.
Penalties of the EU AI Act
The penalty regime was previously three-tiered; the European Parliament’s latest amendments introduce a four-tier approach under Article 71, with some fines surpassing the hefty penalties of GDPR.
The EU AI Act Map
Our EU AI Act Map will guide you through the complexities of the new regulation. It helps you to answer key questions about the Act and determine whether your AI system falls under the high-risk category.
FAQ about EU AI Act
What is the EU AI Act?
Who will be affected by the EU AI Act?
How are companies based in Switzerland impacted by the EU AI Act?
What is the timeline for the EU AI Act?
How can companies be ready for the EU AI Act?
Global AI Regulation
The EU AI Act aims to set the bar for global AI regulation, much like GDPR did for data protection. With extraterritorial reach and a risk-based approach, it aims to protect citizens’ and consumers’ rights. Penalties could reach 7% of global turnover.
The Act is near finalization, expected by early 2024.
- You can read more about the European Parliament’s position on the AI Act by visiting this official document.
Whether you are already using AI in your business or considering it, keeping these upcoming regulatory requirements in mind will be vital to avoiding delays and penalties. Use Modulos to ensure that your AI models are trained transparently on the high-quality data the Act requires.