What is an “AI System”? 

What counts as an “AI system,” and how should multiple AI models or agents within a product be treated? 
This question is central to emerging AI regulations and standards. Different bodies – from the EU’s AI Act to U.S. frameworks like NIST’s AI Risk Management Framework, and global principles (OECD) and standards (ISO/IEC) – offer definitions of “AI system.” Below we break down these definitions and then examine whether a product using multiple AI models/agents is viewed as one AI system or many. We also highlight divergent interpretations and real-world debates on this point.

Key Definitions of an “AI System”

Let’s go over how major frameworks define “AI System”:

EU AI Act (2024) 

The EU AI Act defines an AI system functionally. Per Article 3(1), an AI system is 

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” 

In simpler terms, any software that makes inferences (predictions, decisions, etc.) from data to meet objectives – possibly learning or adapting over time – can be an AI system. Key traits are: 

  1. some degree of autonomy (operating without constant human guidance), 
  2. possible adaptiveness (learning or changing after deployment), and 
  3. using inputs to produce outputs via inference.

The EU AI Office has published detailed guidance that further expands on this definition and on which kinds of systems fall under it. 

NIST AI Risk Management Framework (USA, 2023) 

NIST’s definition closely mirrors the emerging consensus. NIST AI RMF 1.0 defines an AI system as: 

“An engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.” 

This highlights that an AI system is an “engineered” system (emphasizing intentional design) with the ability to autonomously produce outputs (predictions, decisions, etc.) towards given objectives. Notably, NIST’s definition is a bit less explicit about adaptiveness or human involvement than the EU’s, but it captures the same idea of machine-based decision-making.

OECD AI Principles (updated 2024) 

The OECD’s definition has become a global reference and in fact influenced the EU’s wording. The OECD defines an AI system as 

“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” 

This is nearly identical to the EU Act’s language (the EU deliberately aligned with OECD terminology). The OECD also notes that “different AI systems vary in their levels of autonomy and adaptiveness after deployment.” In short, the OECD sees AI systems as any AI-driven software that processes inputs to produce meaningful outputs toward objectives, with autonomy/adaptiveness varying by system.

ISO/IEC Standards (ISO/IEC 22989:2022) 

The ISO/IEC 22989 standard (AI concepts and terminology) defines “artificial intelligence system (AI system)” as an 

“engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.” 

This is very similar to the above definitions. ISO adds that an AI system “can use various techniques and approaches… to develop a model… to represent data, knowledge, processes, etc. which can be used to conduct tasks.” Notably, ISO explicitly defines an “AI component” as a “functional element that constructs an AI system”. In other words, ISO acknowledges that an AI system may be built from multiple components or models – more on this point below.

Common Thread: Across these frameworks, an AI system is essentially software (or an algorithmic system) that uses AI techniques to process inputs and generate outputs (predictions, decisions, recommendations, content) in pursuit of objectives, operating with some autonomy. They all imply that humans set the goals or provide data, but the system can perform reasoning/inference to reach outputs. 

The definitions also converge in listing typical outputs and in recognizing autonomy and adaptiveness as hallmarks of AI (though not always mandatory). This convergence is no accident – the OECD definition has heavily influenced the EU, U.S. (even a 2023 U.S. Executive Order mirrors this language), and ISO, indicating an emerging global consensus on what “AI system” means.

One System or Many? – Multiple AI Models/Agents in a Product

A practical point of confusion arises when a product or service includes multiple AI models or agents. For example, consider a smartphone app that uses one AI model for face recognition and a different AI model for voice assistance, or an “agentic AI” platform where several AI agents work together. Do regulators treat this as one AI system or as multiple? The answer can depend on context, and current definitions leave some room for interpretation. Below we examine how major frameworks and experts approach this:

EU AI Act Perspective

The EU AI Act focuses on AI systems as end-use applications (especially high-risk systems), while also introducing the concept of AI models. Importantly, the Act differentiates between an “AI system” and an “AI model.” An AI model (especially a general-purpose AI model) is essentially the algorithm or trained model itself, which “does not constitute an AI system by itself”. In EU parlance, a model becomes part of an AI system when it is integrated into a final application. For instance, GPT-4 as a large language model is not an “AI system” on its own under the Act, but a chatbot app built on GPT-4 (model + interface + specified use) is an AI system. This means that if a product has multiple AI models serving different functions, each model on its own is not treated as an AI system; it is the system(s) built from them that fall under the Act. (A toy sketch of this model-versus-system distinction appears after the bullet points below.)

  • If those models operate independently for different objectives, you might effectively have multiple AI systems within one product. Each would need to be assessed against the Act’s requirements. For example, a hypothetical smart fridge with an AI vision system for inventory and a separate AI voice assistant could be seen as two AI systems in one device (each evaluated for risk and compliance).
  • If the models are combined into one functional system (e.g. an ensemble of models that together produce one outcome), then it can be viewed as a single AI system with internal AI components. The Act doesn’t explicitly spell out how to delineate system boundaries in such cases, which commentators have noted as a point of ambiguity. In fact, early guidelines on the AI Act’s definition did not give a clear rule on when something is “one” AI system versus multiple, leading to compliance questions. Providers are advised to consider the system’s “intended purpose” (as defined by the Act): if the product has distinct intended purposes enabled by AI, that hints at multiple systems.
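
To make the Act’s “model + interface + specified use” framing more tangible, here is a toy Python sketch. The class names and the chatbot scenario are our own illustrative assumptions, not terminology from the Act; the point is only that the bare model and the deployed application are different artifacts:

```python
# Hypothetical illustration of the EU AI Act's model-vs-system distinction.
# "GeneralPurposeModel" and "ChatbotSystem" are invented names, not legal terms of art.
from dataclasses import dataclass


@dataclass
class GeneralPurposeModel:
    """A trained model on its own: under the Act, not yet an AI system."""
    name: str

    def generate(self, prompt: str) -> str:
        # Stand-in for real model inference.
        return f"[{self.name} output for: {prompt}]"


@dataclass
class ChatbotSystem:
    """Model + interface + specified use: the combination treated as an AI system."""
    model: GeneralPurposeModel
    intended_purpose: str

    def answer_user(self, user_message: str) -> str:
        # The user-facing interface that turns the bare model into a deployed application.
        return self.model.generate(user_message)


support_bot = ChatbotSystem(
    model=GeneralPurposeModel("large-language-model"),
    intended_purpose="answer customer-support questions",
)
print(support_bot.answer_user("How do I reset my password?"))
```

The bare GeneralPurposeModel corresponds to what the Act addresses (if at all) at the model level, while ChatbotSystem (the model wired to an interface for a stated purpose) is the kind of artifact the Act treats as an AI system.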

ISO/IEC View (AI Components)

ISO explicitly acknowledges that an AI system can be constructed from multiple AI components or agents. In ISO 22989 terms, you could say an AI system might contain several AI models (components) working together. This engineering perspective suggests that multiple AI modules can collectively form one larger AI system, especially if they are designed to cooperate for a common objective. 

For instance, a multi-module AI pipeline (a data processing module, an ML model module, and a decision module) would be considered one system with various components. This aligns with how systems engineers view complex AI solutions (sometimes termed “compound AI systems” or “composite AI”): a single AI solution may involve an ensemble of models or multi-step algorithms, but it is delivered as one system.
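
To make this “one system, many components” reading concrete, below is a minimal Python sketch. All class and component names are hypothetical, and the placeholder lambdas stand in for real models; the sketch only illustrates how several AI components can sit behind a single system-level objective and inference path:

```python
# Hypothetical sketch of a compound AI system assembled from several AI components,
# in the spirit of ISO/IEC 22989's "AI component" notion. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AIComponent:
    name: str
    run: Callable[[object], object]  # each component transforms an input into an output


class CompoundAISystem:
    """A single AI system whose behaviour emerges from chained components."""

    def __init__(self, objective: str, components: List[AIComponent]):
        self.objective = objective
        self.components = components

    def infer(self, data: object) -> object:
        # System-level "inference": the input flows through every component in order.
        for component in self.components:
            data = component.run(data)
        return data


# One product-level objective, three internal AI components -> arguably one AI system.
inventory_system = CompoundAISystem(
    objective="estimate remaining groceries from fridge camera images",
    components=[
        AIComponent("image_preprocessor", lambda img: img),            # placeholder transform
        AIComponent("object_detector", lambda img: ["milk", "eggs"]),  # placeholder detection
        AIComponent("inventory_decision", lambda items: {"reorder": "milk" not in items}),
    ],
)
print(inventory_system.infer("raw_camera_frame"))
```

From the outside, the product exposes one objective and one inference call, which is the main reason such a pipeline is naturally described as a single AI system.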

NIST and OECD

Neither NIST nor the OECD explicitly spells out how to count multiple models, but their definitions don’t forbid a system comprising multiple AI models. Given that they define an AI system by its functionality and outputs, one could infer that if multiple AI models act together towards one overall outcome, they collectively constitute one AI system. 

For example, an AI-powered lending platform might use separate models for credit scoring and for fraud detection; regulators would likely evaluate each model’s function (perhaps as separate AI systems serving different purposes in the workflow). On the other hand, a complex AI service (like an autonomous vehicle’s AI) that includes vision, decision, and control models would probably be seen as one integrated AI system – albeit a very complex one – since all components serve the single objective of autonomous driving. 

The OECD, in describing AI systems, even notes that they can take a variety of forms and implementations, implying that internal complexity (the number of models) doesn’t change the fact that something is one “system” if it functions as one.

“Agentic” or Multi-Agent AI

Recent products, such as Lindy and other multi-agent platforms, consist of several AI agents working in tandem. Industry commentary refers to these as single AI systems made up of multiple agents. For example, Meta’s AI policy director described “agentic AI” as “an AI system composed of multiple AI agents that can act autonomously to complete tasks.”

This suggests that even when AI agents are modular, people may consider the whole assembly as one system – especially if offered as one product or service. However, each agent might be assessed for specific risks. Regulators have not yet issued formal rules on multi-agent systems specifically, but they would likely apply the same logic: look at the overall functionality and risk of the combined system, while also considering the roles of each component/agent.

Real-World Interpretation & Debates

Legal and compliance experts are actively discussing this “one vs. many” issue. A key challenge is drawing boundaries around AI systems. Modern AI applications often combine multiple models working together, and it can be unclear whether to treat that as one regulated system or several. The European Commission’s non-binding guidelines (Feb 2025) on the definition of an AI system aim to help, but as some commentators note, they “offer no clear rules” on composite systems. 

In practice, companies may need to document and assess each AI-powered functionality separately to be safe. For instance, a company might fill out an AI Act compliance checklist “for each individual AI system used in your organisation” – implying that if you have two AI subsystems, you would consider them separately. On the other hand, if those subsystems are tightly integrated (and especially if they are only sold or deployed together), one might argue they form one “AI system” for regulatory purposes. Until enforcement or case law sheds light, organizations are advised to err on the side of caution by identifying all significant AI components in a product and ensuring each meets applicable requirements.
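
As a rough illustration of what documenting “each individual AI system” might look like, here is a hypothetical inventory structure in Python; the field names, risk labels, and example entries are assumptions made for this sketch, not a template prescribed by the AI Act or any official checklist:

```python
# Hypothetical compliance inventory: one record per AI system, each listing its internal
# models/components. Field names are illustrative, not taken from the AI Act or a standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIComponentEntry:
    name: str
    purpose: str    # what the model does inside the system
    provider: str   # e.g. in-house or a third-party model provider


@dataclass
class AISystemRecord:
    system_name: str
    intended_purpose: str   # the system-level purpose, in the spirit of the Act's term
    risk_category: str      # e.g. "high-risk", "limited-risk", "to be assessed"
    components: List[AIComponentEntry] = field(default_factory=list)


# Two AI-powered features with distinct intended purposes -> two records, assessed separately.
registry = [
    AISystemRecord(
        system_name="face-unlock",
        intended_purpose="biometric authentication of the device owner",
        risk_category="to be assessed",
        components=[AIComponentEntry("face_embedding_model", "face matching", "in-house")],
    ),
    AISystemRecord(
        system_name="voice-assistant",
        intended_purpose="natural-language control of device functions",
        risk_category="to be assessed",
        components=[
            AIComponentEntry("speech_to_text_model", "transcription", "third-party"),
            AIComponentEntry("intent_model", "command interpretation", "in-house"),
        ],
    ),
]

for record in registry:
    print(record.system_name, "->", len(record.components), "component(s)")
```

Keeping one record per AI system, with its internal components listed, supports either reading: the systems can be assessed separately, while the component lists still capture every model that would need attention if the product were treated as one system.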

Divergences and Clear Explanations

Varying Scope and Emphasis

While there is broad alignment in definitions, subtle differences exist. The EU and OECD stress autonomy and adaptiveness, explicitly including the idea of systems that can learn or change after deployment. ISO and NIST include those ideas implicitly but emphasize the engineering aspect (ISO uses the term “engineered system” and lists outputs and objectives; NIST says “engineered or machine-based system”). The role of humans is another subtle point: ISO’s definition refers to “human-defined objectives,” while the EU and OECD refer to “explicit or implicit objectives”; in both cases the underlying idea is that humans set the goals or provide data. NIST’s definition does not explicitly mention humans, though this is implied by the objectives being given. All frameworks agree that AI systems can range from fully automated tools to decision-support tools with a human in the loop, and none of these definitions exclude systems with multiple sub-models.

EU’s Unique Distinction – AI Model vs AI System

One divergence is the EU AI Act’s introduction of general-purpose AI (GPAI) models as a regulated category separate from AI systems. A “GPAI model” is defined in the Act (e.g. a large language model) and has its own obligations. The Act clarifies that “an AI model is an essential part of an AI system but does not constitute an AI system itself”. This means the EU is, in effect, regulating certain large models at the model level (to ensure things like transparency and risk mitigation by the model provider) and regulating AI systems at the application level. 

Other frameworks (OECD, NIST, ISO) do not carve out “model” as a separate category; they focus only on systems. This divergence can confuse terminology: under the EU approach, if your product has two AI models inside, you have one or more AI systems depending on integration, and if either model is a general-purpose model, the model itself may attract certain obligations. Non-general-purpose models, by contrast, are left largely to the control of the system provider. 

Thus, the EU framework is a bit more complex: AI systems (end uses) vs. general-purpose models (core technology).

Multiple Models in One Product – Different Interpretations

Most regulations are technology-neutral about system architecture. They don’t explicitly say “an AI system may contain multiple models” in the legal text, so interpretation is required. The ISO concept of an AI component clearly supports the one-system-with-multiple-parts reading. 

The OECD and EU definitions, by mentioning “a system…infers…from input to generate output,” neither forbid nor explicitly address multiple inference steps or models. It comes down to how you define the “system’s” scope. If two AI models are tightly coupled (one’s output is another’s input, in a pipeline for one overall task), regulators would likely treat the pipeline as one AI system. 

If the models serve distinct purposes (even if they ship in one software package), they might be counted as separate systems. There is room for divergent interpretation, which is why experts debate where to draw the line. For example, a LinkedIn analysis pointed out that the EU’s guidance doesn’t resolve “how to define boundaries when models from different providers work together in one AI solution.” The lack of clarity can lead to compliance headaches, as companies wonder whether they must, say, document each model separately or the product as a whole.
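
One informal way to operationalize the boundary question is to group models by data-flow coupling: models that feed one another for a single overall task fall into one candidate system, while unconnected models form separate candidates. The sketch below implements that heuristic in Python purely for illustration; it is an assumption of this article, not a legal test, and all model names are invented:

```python
# Illustrative heuristic only (not a legal test): models linked by data flows for one
# overall task are grouped into a single candidate system; unconnected models are not.
from collections import defaultdict
from typing import Dict, List, Set, Tuple


def group_into_candidate_systems(models: List[str],
                                 data_flows: List[Tuple[str, str]]) -> List[Set[str]]:
    """Group models into connected components over their data-flow links."""
    adjacency: Dict[str, Set[str]] = defaultdict(set)
    for upstream, downstream in data_flows:
        adjacency[upstream].add(downstream)
        adjacency[downstream].add(upstream)

    seen: Set[str] = set()
    groups: List[Set[str]] = []
    for model in models:
        if model in seen:
            continue
        group: Set[str] = set()
        stack = [model]
        while stack:
            current = stack.pop()
            if current in group:
                continue
            group.add(current)
            stack.extend(adjacency[current] - group)
        seen |= group
        groups.append(group)
    return groups


# A vision model piping into a driving-decision model is tightly coupled (one group);
# an unrelated in-cabin music recommender ends up in its own group.
models = ["vision_model", "driving_decision_model", "music_recommender"]
flows = [("vision_model", "driving_decision_model")]
print(group_into_candidate_systems(models, flows))
```

In the example, the vision and driving-decision models end up in one group because one feeds the other, while the unrelated music recommender stands alone, mirroring the intuition that coupling for a shared task is what makes several models look like one system.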

Real-World Example

Consider a social media platform’s AI. Meta’s system cards note that a single AI system (say, the content recommendation engine) “has multiple models that identify content and predict how likely a person is to interact with it.” Meta treats this as one AI system (the recommender system) composed of multiple cooperating models. 

This aligns with the notion of a compound AI system in which “multiple AI components work together, each potentially producing its own output” toward a final result. So in practice, companies often speak of one AI system comprising many models. Regulators, in turn, will likely require that each significant model’s impact is accounted for within the system’s risk management. Spain’s data protection authority (AEPD) articulated that “an AI system is made up of different elements: interfaces, sensors, communications… and at least one AI model”, often “configured by the combination of several algorithms.” All those algorithms (models) should be evaluated as part of the AI system’s impact. This suggests a holistic view: treat the collection of models as one system for assessment, rather than splitting the analysis into a separate exercise for each sub-model.

Bottom Line

If a product uses multiple AI models or agents, whether it’s considered one or multiple AI systems depends on how integrated their functions are and how regulations are applied. Generally:

  • If the models together enable one overarching functionality or use-case, regulators and standards bodies would treat it as one AI system (with internal components). You would ensure the system as a whole meets requirements (e.g. transparency, safety, fairness), and document the roles of each component. This is supported by definitions that focus on the system’s objectives and outputs rather than its internal count of models.
  • If the models serve distinctly different purposes (different use-cases or user-facing functions), then you are effectively dealing with multiple AI systems, even if they are packaged in one product. Each would need consideration under the law. For instance, under the EU Act, one might be high-risk (e.g. covered by an Annex III use-case) and the other not, so you would handle their compliance separately.

Because official guidance is still evolving, interpretations can diverge. The European Commission’s AI Office may release further clarification as use-cases emerge, and ultimately courts or enforcement practices will solidify these concepts. For now, the divergence in interpretation is mostly about where to draw the boundary around an “AI system.” 

All definitions agree on what an AI system does. The open question is mostly practical: is it one system or many? That hinges on how tightly the AI components are integrated in serving a common goal.

Sources