August 20, 2025

EU AI Act vs GDPR: key differences every business needs to know

By Modulos · 5 min read

The EU AI Act and GDPR get mentioned in the same breath, usually by people who have read neither. They solve different problems, through different mechanisms, with different obligations. Assuming one covers the other is a compliance failure waiting to happen.

The core distinction is structural. GDPR is a data protection regulation. It governs how personal data flows, who can access it, and what individuals must be told. The EU AI Act is a product safety regulation, built on the same template as medical devices and CE marking: classify the risk, pass a conformity assessment for some systems, submit to post-market surveillance. Everything downstream follows from that.

Scope and applicability

Both are Regulations, directly applicable in all 27 member states without national implementing law. GDPR has applied since May 2018. The AI Act phases in: prohibitions from 2 February 2025, general-purpose AI obligations from 2 August 2025, high-risk obligations from 2 August 2026. The Digital Omnibus currently in trilogue proposes pushing some of those dates (standalone high-risk to December 2027, embedded products to August 2028). Nothing is enacted yet, so don't plan around delays that may not land.

GDPR's scope is defined by the data. If you process personal data of people in the EU, you are in scope, whether you use AI or Excel. The AI Act's scope is defined by the system. If you put an AI system on the EU market, you are in scope, whether it touches personal data or not. A computer vision model inspecting welds on a factory floor falls under the AI Act and not GDPR. A rule-based CRM running on customer records falls under GDPR and not the AI Act. An AI-powered CV screener falls under both.
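
To make the two scope triggers concrete, here is a minimal sketch. The booleans stand in for what are, in reality, legal scope tests rather than one-line checks, so treat this as illustration, not legal advice:

```python
# Illustrative only: each boolean compresses a genuine legal analysis
# (territorial and material scope) into a single flag.

def applicable_frameworks(processes_eu_personal_data: bool,
                          ai_system_on_eu_market: bool) -> set[str]:
    frameworks = set()
    if processes_eu_personal_data:   # GDPR's trigger follows the data
        frameworks.add("GDPR")
    if ai_system_on_eu_market:       # the AI Act's trigger follows the system
        frameworks.add("EU AI Act")
    return frameworks

# The three examples from the text:
print(applicable_frameworks(False, True))   # weld-inspection vision model -> {'EU AI Act'}
print(applicable_frameworks(True, False))   # rule-based CRM on customer data -> {'GDPR'}
print(applicable_frameworks(True, True))    # AI-powered CV screener -> both
```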

Risk-based, organised differently

Both regulations are risk-based. They organise risk differently.

GDPR expects you to assess the risk to individual rights through a Data Protection Impact Assessment and scale safeguards accordingly. There are no statutory risk tiers. The calibration is yours, supervised by Data Protection Authorities.

The AI Act bakes the risk categories into the law. Prohibited practices (social scoring, manipulative subliminal techniques, untargeted scraping of facial images, emotion recognition in workplaces and schools, real-time remote biometric identification in public spaces, among others) are off-limits. High-risk systems, those listed in Annex III or used as safety components of regulated products, must pass a conformity assessment, maintain technical documentation, run a quality management system, support human oversight, and log events. Limited-risk systems face transparency duties: users know they are interacting with AI, deepfakes get labelled. Minimal-risk systems carry no specific obligations.
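
The tier structure lends itself to a simple lookup. A minimal sketch of the tier-to-obligation mapping just described; the strings paraphrase the obligations above and are not the Act's legal text, and assigning the tier to a concrete system is the actual legal analysis, which this code assumes is already done:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Obligations paraphrased from the paragraph above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "quality management system",
        "human oversight",
        "event logging",
    ],
    RiskTier.LIMITED: ["transparency: disclose AI interaction, label deepfakes"],
    RiskTier.MINIMAL: [],
}

print(OBLIGATIONS[RiskTier.HIGH])
```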

Compliance model and enforcement

GDPR is self-assessment with supervisory oversight. You run DPIAs, keep records, appoint a DPO where required, and handle data subject requests. DPAs investigate, audit, and fine. The ceiling is €20M or 4% of global annual turnover, whichever is higher.

The AI Act is pre-market conformity assessment plus post-market surveillance. Many high-risk providers self-assess conformity. Others, particularly those already covered by sectoral product legislation or certain biometric systems, need third-party assessment by a notified body before the product reaches the market. After launch, providers monitor performance, report serious incidents, and cooperate with market surveillance authorities. The ceilings are steeper: €35M or 7% of global turnover, whichever is higher, for prohibited-practice violations, and €15M or 3% for most other breaches.

The interaction: both at once

Most organisations arrive at this question because they have an AI system that processes personal data. The answer is blunt: you are subject to both. The AI Act asks what risk tier the system occupies and whether it meets the applicable conformity requirements. GDPR asks what risks to data subject rights exist and how they are mitigated. The two run in parallel. Neither substitutes for the other.

A biometric identification system can be fully GDPR-compliant and still prohibited under the AI Act. A fraud detection model can sail through GDPR and still be high-risk under Annex III, which means conformity assessment. Organisations that try to stretch a GDPR programme to cover the AI Act discover quickly that the required artefacts (technical documentation, intended-purpose statements, human-oversight design, logging) simply do not exist in a typical data protection programme.

EU AI Act vs GDPR at a glance

| Dimension | GDPR | EU AI Act |
| --- | --- | --- |
| Regulatory type | Data protection | Product safety |
| Scope trigger | Processing of personal data | Placing or using an AI system on the EU market |
| Risk framework | Self-assessed via DPIA | Statutory tiers: prohibited, high-risk, limited, minimal |
| Compliance model | Self-assessment with DPA oversight | Pre-market conformity assessment; notified bodies for some high-risk systems |
| Key date | In force since 25 May 2018 | High-risk obligations from 2 August 2026 (subject to Digital Omnibus) |
| Maximum fine | €20M or 4% of global turnover, whichever is higher | €35M or 7% of global turnover, whichever is higher (prohibited practices) |
| Enforcement | Data Protection Authorities | National Market Surveillance Authorities and the AI Office |

What to do next

If you put AI systems on the EU market, you need working programmes for both frameworks. GDPR expertise alone will not cover the AI Act. The evidence base is different, the roles are different, the review gates are different. Start by inventorying your AI systems, classifying each under the AI Act's risk tiers, and mapping which ones also fall under GDPR because they process personal data. From there, the conformity assessment path, technical documentation, and post-market monitoring run in parallel with DPIAs and records of processing activities.
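
One way to start that inventory is a structured record per system that captures both classifications at once. A minimal sketch; the field names and workstream strings are illustrative, not a Modulos schema or a complete obligation list:

```python
from dataclasses import dataclass, field

# Field names are illustrative; real inventories carry far more detail.
@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    ai_act_tier: str              # prohibited / high-risk / limited-risk / minimal-risk
    processes_personal_data: bool
    artefacts: list[str] = field(default_factory=list)

def required_workstreams(s: AISystemRecord) -> list[str]:
    work = []
    if s.ai_act_tier == "high-risk":
        # AI Act track
        work += ["conformity assessment", "technical documentation",
                 "post-market monitoring"]
    if s.processes_personal_data:
        # GDPR track runs in parallel, never instead
        work += ["DPIA", "records of processing activities"]
    return work

screener = AISystemRecord("CV screener", "rank job applicants",
                          ai_act_tier="high-risk",   # employment is an Annex III area
                          processes_personal_data=True)
print(required_workstreams(screener))
```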

For a deeper breakdown of the regulation itself, see the EU AI Act overview. For how governance programmes get built in practice, the guide to AI governance is the right starting point. To see how this gets operationalised across frameworks in one system, the Modulos AI governance platform is built for exactly that.

Ready to Transform Your AI Governance?

Discover how Modulos can help your organization build compliant and trustworthy AI systems.