February 9, 2024

AI systems under the EU AI Act: definitions, models and scope explained

By Modulos · 6 min read

Updated April 2026

The EU AI Act has been in force since August 2024. The prohibitions kicked in February 2025, GPAI obligations in August 2025, and the high-risk regime is on track for August 2026 (with the Digital Omnibus currently in trilogue proposing a push to December 2027 for standalone high-risk systems). If you are trying to figure out whether the Act applies to something you build, buy or deploy, the first question is almost always the same: is this thing an "AI system" in the legal sense?

The answer is less obvious than it looks, because "AI" in everyday use lumps together at least three different things: models, systems built on models, and applications built on systems. The Act regulates one of these directly, touches another, and largely ignores the third. Getting the taxonomy right is the difference between a coherent compliance program and a panic.

The legal definition

Article 3(1) of the AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

That definition is deliberately broad and deliberately aligned with the OECD. The key operative word is infers. Rule-based software that does not learn from data and does not generalize is not in scope. A statistical model that does is.

Model, system, application

The clearest way to see how the Act bites is to separate three layers.

Model. The raw mathematical object: weights plus architecture. GPT-4, Llama 3, Claude, Gemini, Mistral Large. On its own a model does nothing. It needs an inference stack, an interface and an objective.

System. A model wrapped in infrastructure that turns inputs into outputs for a purpose. ChatGPT is a system built on GPT-4. GitHub Copilot is a system built on OpenAI models tuned for code. A hospital triage tool built on a proprietary model is a system.

Application. The deployment of a system in a concrete context: a bank using an LLM-powered system to draft customer emails, a hospital using a triage system in its ED, a recruiter using a CV-screening system.

The AI Act regulates systems. It imposes obligations on providers (who build and place systems on the market) and deployers (who use them in a professional capacity). It carves out a separate regime for general-purpose AI models under Articles 51-55, with transparency and systemic-risk obligations aimed at the model layer. It does not regulate applications directly but regulates systems by reference to the applications they enable: a CV-screening system is high-risk because of how it will be used, not because of its architecture.

The taxonomy matters because your obligations flow from which layer you operate at. A GPAI provider has one set of duties. A deployer of a high-risk system has a different set. A company fine-tuning a foundation model for internal use may be both.

This three-layer split is a refinement of Aleksandr Tiulkanov's classification diagram, which remains the clearest visual map of how these concepts nest.
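The layer-plus-role logic above can be sketched as a lookup. This is an illustrative simplification, not legal advice: the `Actor` type, the duty strings and the mapping are invented here to make the structure visible, and the real analysis turns on the Act's text, not on code.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    layer: str   # "model", "system", or "application"
    role: str    # "provider" or "deployer"

def duties(actor: Actor) -> list[str]:
    """Very rough mapping from layer + role to the Act's duty buckets."""
    if actor.layer == "model" and actor.role == "provider":
        # GPAI regime: baseline transparency, plus systemic-risk duties if applicable
        return ["GPAI transparency (Art. 53)",
                "systemic-risk duties if over threshold (Art. 55)"]
    if actor.layer == "system" and actor.role == "provider":
        return ["conformity assessment if high-risk",
                "transparency (Art. 50)"]
    if actor.layer in ("system", "application") and actor.role == "deployer":
        return ["use per provider instructions",
                "transparency to users (Art. 50)",
                "AI literacy (Art. 4)"]
    return []  # applications are not regulated directly

# A company fine-tuning a foundation model for internal use may hold two roles:
print(duties(Actor("system", "provider")))
print(duties(Actor("system", "deployer")))
```

The point of the sketch is that the function's argument is the actor's position in the stack, not the model architecture: change the layer or role and the duty set changes, swap the underlying model and it does not.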

What counts as "inference"

The Commission's February 2025 guidance on the AI system definition clarified a few edge cases worth knowing.

Expert systems and pure rule-based logic are out of scope: they do not infer. Classical statistical methods (linear regression, basic descriptive statistics) are out of scope unless they are being used to learn patterns and generate outputs that influence decisions in the way ML systems do. Optimisation solvers and search algorithms are generally out. Physics simulations are out.

In scope: supervised and unsupervised ML, reinforcement learning, deep learning, foundation models, and any hybrid that relies meaningfully on learned components. If your system updates its behaviour based on data, or generates outputs that were not pre-specified by a human programmer, assume it is in scope and work backwards.
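The in/out-of-scope heuristic above can be written as a rough screen for the Article 3(1) "infers" criterion. The predicate names are invented for illustration and the real test is a legal one applied case by case, but the decision order mirrors the Commission's guidance as summarised here:

```python
def likely_in_scope(uses_learned_components: bool,
                    pure_rule_based: bool,
                    outputs_prespecified: bool) -> bool:
    """Rough screen for the Article 3(1) 'infers' criterion (illustrative only)."""
    if pure_rule_based:
        return False   # expert systems, fixed rules, solvers: no inference
    if uses_learned_components:
        return True    # ML, RL, deep learning, foundation models: in scope
    # Fallback from the text above: if outputs were not pre-specified by a
    # human programmer, assume in scope and work backwards.
    return not outputs_prespecified

# A regex-based email router is out; an LLM-based triage tool is in:
print(likely_in_scope(uses_learned_components=False,
                      pure_rule_based=True,
                      outputs_prespecified=True))   # False
print(likely_in_scope(uses_learned_components=True,
                      pure_rule_based=False,
                      outputs_prespecified=False))  # True
```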

General-purpose AI models

GPAI is the category added late in the trilogue to deal with foundation models. A GPAI model is one that displays significant generality and can competently perform a wide range of distinct tasks. Article 51 sets a compute threshold (10^25 FLOPs for training) above which a model is presumed to pose systemic risk and triggers the full Article 55 obligations: model evaluation, adversarial testing, serious-incident reporting, cybersecurity.

Below that threshold, GPAI providers still have baseline Article 53 duties: technical documentation, copyright compliance, a public training-data summary.
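The threshold split can be made concrete with a small sketch. The constant reflects the 10^25 FLOPs presumption in Article 51; the duty strings paraphrase Articles 53 and 55 as described above, and the function itself is a hypothetical illustration, not an official test.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold (training compute)

def gpai_obligations(training_flops: float) -> list[str]:
    """Baseline Art. 53 duties for all GPAI providers, plus Art. 55 above threshold."""
    obligations = ["technical documentation (Art. 53)",
                   "copyright compliance (Art. 53)",
                   "public training-data summary (Art. 53)"]
    if training_flops > SYSTEMIC_RISK_FLOPS:
        obligations += ["model evaluation (Art. 55)",
                        "adversarial testing (Art. 55)",
                        "serious-incident reporting (Art. 55)",
                        "cybersecurity (Art. 55)"]
    return obligations

# A model trained with ~2e25 FLOPs is presumed to pose systemic risk:
print(len(gpai_obligations(2e25)))  # 3 baseline + 4 systemic-risk duties = 7
print(len(gpai_obligations(1e24)))  # baseline only = 3
```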

Downstream providers who build systems on top of a GPAI model inherit some obligations but not all. The model provider is on the hook for the model card; the system provider is on the hook for the system.

Why the distinction matters in practice

Three concrete consequences.

First, scope. If you deploy ChatGPT Enterprise to draft internal memos, you are a deployer of a general-purpose AI system but not of a high-risk one. Your obligations are mostly Article 50 transparency (tell users they are interacting with AI) and Article 4 AI literacy. That is a light regime.

Second, substitution. If you swap the underlying model (GPT-4 to Claude to a local Llama) without changing what the system does or how it is used, your obligations as a deployer do not change. The Act cares about purpose, not plumbing.

Third, stacking. If you build a CV-screening product on top of a GPAI model, you are both a downstream provider of a high-risk system (Annex III point 4) and a deployer of a GPAI model. You own the conformity assessment for the system. The model provider owns the model card.

Where to go next

If you want the first-principles version of what an AI system actually is (as distinct from what the Act says it is), read our longer piece on what is an AI system. If you are trying to work out whether your system is high-risk, start with EU AI Act risk categories. For the full current state of play including the Digital Omnibus, see EU AI Act summary 2026.