A Taxonomy of AI Systems and Models in the EU AI Act

The final version of the EU AI Act has changed significantly over the past year, and it now subjects different types of systems to different levels of regulation. We can break down this complexity by first asking whether something is an “AI System” or a “Model”, and then assessing whether it falls within the scope of the various Titles of the AI Act.

Graphic by Aleksandr Tiulkanov
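
To make that breakdown concrete, the sketch below encodes the decision flow the graphic illustrates as a small Python routine. The category names and dictionary flags are our own shorthand rather than terms from the Act, and real classification of any given system requires legal analysis.

```python
from enum import Enum, auto

class ActCategory(Enum):
    PROHIBITED = auto()       # Title II
    HIGH_RISK = auto()        # Title III
    TRANSPARENCY = auto()     # Title IV
    OUT_OF_SCOPE = auto()
    MINIMAL_RISK = auto()

def classify(system: dict) -> ActCategory:
    # Triage in the order the Act implies: scope exclusions first,
    # then prohibitions, then high-risk uses, then transparency duties.
    # All dictionary keys here are hypothetical flags, not terms from the Act.
    if (system.get("military_use") or system.get("personal_use")
            or system.get("research_only")):
        return ActCategory.OUT_OF_SCOPE
    if system.get("prohibited_practice"):     # e.g. social scoring
        return ActCategory.PROHIBITED
    if system.get("high_risk_use_case"):      # e.g. recruitment screening
        return ActCategory.HIGH_RISK
    if system.get("interacts_with_humans"):   # e.g. chatbots, deepfakes
        return ActCategory.TRANSPARENCY
    return ActCategory.MINIMAL_RISK

# Example: a recruitment screening tool lands in the high-risk tier
print(classify({"high_risk_use_case": True}))   # ActCategory.HIGH_RISK
```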

In Scope

Specific Purpose AI Systems: Prohibited (T.II)

Description: AI systems that are banned under the AI Act because they pose unacceptable risks, such as violations of fundamental rights.

Examples:

  • Social scoring by governments that could lead to discrimination.
  • Real-time biometric identification systems such as facial recognition in publicly accessible spaces for law enforcement purposes, barring specific exceptions.

Reference: EU AI Act, Title II (Article 5) prohibits certain AI practices.

Specific Purpose AI Systems: High-Risk (T.III)

Description: AI systems that are not prohibited but could pose high risks to safety, livelihoods, or fundamental rights.

Examples:

  • AI for critical infrastructure management, like power grid control systems.
  • Recruitment AI used to screen job applicants.

Reference: EU AI Act, Title III sets out the regulatory framework for high-risk AI systems.

AI Systems with Transparency Obligations (T.IV)

Description: AI systems that must disclose their AI-driven nature to users, ensuring transparency.

Examples:

  • Chatbots, where it must be clear that the user is interacting with an AI.
  • Deepfakes or AI-generated content where there is an obligation to disclose the artificial nature of the content.

Reference: EU AI Act, Title IV (Article 52) specifies transparency obligations for certain AI systems.

General Purpose AI Systems (GPAIS)

Description: AI systems with a broad range of possible applications; once deployed in a specific context, they may also fall into the T.III or T.IV categories.

Examples:

  • AI platforms used for various services like language translation, recommendation systems, etc.
  • Cloud-based AI services that can be tailored for different business analytics purposes.

Reference: The operative articles of the EU AI Act do not define “General Purpose AI Systems”; the concept is addressed in Recital 60d of the compromise text.

Out of Scope

GPAIS released under FOSS licenses

Description: AI systems released under free and open-source licenses, which the Act treats favourably to encourage innovation and collaboration. Note that the exclusion does not extend to systems that fall into the prohibited, high-risk, or transparency-obligation categories.

Examples:

  • Open-source machine learning frameworks such as TensorFlow or PyTorch.
  • Openly licensed models and AI projects hosted on platforms such as GitHub.

Reference: The Act supports open-source initiatives by excluding them from certain obligations, as indicated in its recitals and exemption provisions.

Specific Purpose AI Systems: Military, Defence, or National Security

Description: AI systems used exclusively for military, defence, or national security purposes, which are excluded from the Act’s scope.

Examples:

  • Autonomous drones used by the military.
  • Cybersecurity defense mechanisms employing AI for national infrastructure.

Reference: EU AI Act, Title I (Article 2) excludes AI systems developed or used exclusively for military purposes from the Regulation’s scope.

AI Systems deployed for Personal Non-Professional Use

Description: Systems intended for individual use in a non-commercial, non-professional capacity.

Examples:

  • Personal AI assistants like voice-activated smart home devices.
  • AI-powered fitness or health-tracking applications for personal use.

Reference: EU AI Act, Title I (Article 2) defines the scope of the Act and excludes purely personal, non-professional use.

Specific Purpose AI Systems: Minimal Risk

Description: AI applications considered to pose negligible risk to rights or safety.

Examples:

  • AI-enabled spam filters.
  • AI-driven video game Non-Player Characters (NPCs).

Reference: Minimal-risk AI systems fall outside the high-risk category of EU AI Act, Title III and are therefore not subject to its requirements.

AI Systems and Models: used solely for Scientific Research and Development

Description: Systems and models used strictly for scientific or academic research purposes.

Examples:

  • AI used in a laboratory setting for drug discovery.
  • AI models used in academic research for theoretical advancements in machine learning.

Reference: The scope provisions of the EU AI Act (Title I, Article 2) exempt AI systems and models developed and used solely for scientific research and development.

AI Systems and Models used by 3rd Country Public Authorities or Int’l Orgs

Description: AI systems used by public authorities of third countries or by international organisations within the framework of international cooperation agreements, provided adequate safeguards are in place.

Examples:

  • AI systems used by the United Nations for international development programs.
  • AI applications in international law enforcement collaborations.

Reference: EU AI Act, Title I (Article 2) excludes such use where it occurs under international agreements for law enforcement and judicial cooperation, subject to adequate safeguards.

AI Model Categories

General Purpose AI Models with Systemic Risk (GPAIM-SR)

Description: General-purpose AI models whose capabilities or reach create systemic risk and which therefore face more stringent oversight. The compromise text presumes systemic risk when the cumulative compute used for training exceeds 10^25 floating-point operations.

Examples:

  • Frontier foundation models trained at very large compute scale, such as the largest language models.
  • Widely deployed models whose generated content could influence public opinion or election outcomes at scale.

Reference: The compromise text adds dedicated provisions for general-purpose AI models with systemic risk, including obligations around model evaluation, risk mitigation, incident reporting, and cybersecurity.
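
To illustrate how the training-compute presumption works, here is a back-of-the-envelope sketch. The 10^25 FLOP threshold is from the compromise text; the 6·N·D cost approximation comes from the scaling-laws literature, not from the Act, and the model figures are hypothetical.

```python
# Presumption threshold from the compromise text: cumulative training
# compute above 1e25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Standard 6*N*D heuristic for dense transformer training cost.
    This approximation is from the scaling-laws literature, not the Act."""
    return 6.0 * parameters * tokens

# Hypothetical model: 70B parameters trained on 2T tokens
flops = estimated_training_flops(70e9, 2e12)    # ~8.4e23 FLOPs
print(f"{flops:.2e} FLOPs; systemic-risk presumption triggered: "
      f"{flops > SYSTEMIC_RISK_FLOP_THRESHOLD}")
```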

General Purpose AI Models (GPAIM)

Description: General-purpose AI models that can be adapted to a wide range of applications and are subject to the standard tier of obligations.

Examples:

  • Pre-trained machine learning models for image recognition available for various applications.
  • Language models used for applications ranging from writing aids to coding assistants.

Reference: Providers of general-purpose AI models face baseline documentation and transparency obligations under the compromise text; these models are also regulated under EU AI Act, Title III when applied in a high-risk context.

Navigating the AI Regulatory Landscape

As the European Commission, the Council, and the European Parliament bring the AI Act into force, leaders in artificial intelligence, such as Chief Technology Officers (CTOs) and Heads of Analytics, must navigate a new regulatory landscape. This pivotal legislation reshapes how AI is developed, deployed, and managed, requiring a proactive approach to compliance and governance.

Audit and Classification Compliance

For these leaders, the first step is a thorough audit of existing and planned AI systems against the AI Act’s classifications. Understanding where each system falls within the taxonomy is critical. This isn’t just a box-checking exercise; it’s about genuinely understanding the risk profile and impact of the AI systems under your stewardship.
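
One practical starting point is a lightweight inventory that records each system’s classification and the open compliance questions it raises. A minimal sketch in Python, with illustrative field names that are not mandated by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI inventory; field names are illustrative."""
    name: str
    owner: str                  # accountable team or person
    purpose: str                # intended use, in plain language
    act_category: str           # e.g. "high-risk (T.III)", "transparency (T.IV)"
    open_questions: list = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="cv-screener",
        owner="Talent Acquisition",
        purpose="Ranks incoming job applications",
        act_category="high-risk (T.III)",
        open_questions=["Conformity assessment route?", "Human oversight design?"],
    ),
]

# Surface the systems that need legal review first
for record in inventory:
    if record.act_category.startswith("high-risk"):
        print(record.name, "->", record.open_questions)
```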

Addressing High-Risk Applications

Prohibited and high-risk applications demand immediate attention. If your organization is involved with AI systems that fall into these categories, you will need to work closely with legal and compliance teams to ensure that your AI applications adhere to the stringent requirements set forth by the Act. This may involve significant changes to your operational processes, data handling, and transparency measures.

Enhancing AI Transparency

For AI systems classified under transparency obligations, you must consider how you will communicate with end-users. It’s not just about being open about the use of AI; you’ll need to explain how these systems affect the user’s experience and the decisions being made. This is where user experience (UX) teams and legal experts must collaborate to create clear communications.
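
As one concrete example, a chatbot can surface its AI-driven nature with a notice on the first conversational turn. A minimal sketch; the disclosure wording and function names are illustrative, not language prescribed by the Act:

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant; it is not a human. "
    "Its answers may contain mistakes."
)

def generate_answer(user_message: str) -> str:
    # Stand-in for a real model call; purely illustrative
    return f"(model response to: {user_message!r})"

def respond(user_message: str, first_turn: bool) -> str:
    # Prepend the disclosure on the first turn so the user knows
    # from the outset that they are interacting with an AI.
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(respond("What are my parental leave options?", first_turn=True))
```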

Adopting Best Practices

Even if your AI systems are deemed low risk or fall out of scope of the Act, it’s wise to stay informed about the evolving regulatory environment. The AI Act is likely just the beginning of a global trend towards more stringent AI regulation. Adopting best practices now, such as those recommended by NIST and ISO standards, can future-proof your organization against upcoming changes.

Furthermore, for AI systems that are open-source or for personal non-professional use, you should still maintain a level of due diligence. Encouraging best practices in documentation and ethical AI use among developers and users can help promote a culture of responsibility that goes beyond compliance.

Global AI Considerations

It’s also essential to consider the international aspects of AI development and deployment. If your systems are used in cooperation with third-country public authorities or international organizations, understand the safeguards that must be in place. The global nature of AI technology means that cross-border considerations will become increasingly complex and important.

AI Literacy Advantage

Finally, investing in AI literacy across your organization is crucial. The AI Act implicitly endorses the need for a knowledgeable workforce, and ensuring that your team understands both the capabilities and limitations of AI will be a competitive advantage.