Streamlining EU AI Act Compliance with Requirements Scoping

Discover a rule-based requirements scoping methodology for EU AI Act compliance, achieving up to 60% efficiency gains.

Abstract

This article presents a rule-based requirements scoping methodology for EU AI Act compliance, demonstrating quantifiable efficiency gains across different organizational contexts. By connecting structured assessment questionnaires directly to compliance workflows, organizations can target their compliance effort, gaining up to 60% efficiency for certain roles while maintaining regulatory adherence. Our analysis shows that a high-risk classification of an AI System (AIS) increases the compliance workload by a factor of eight, and that a General Purpose AI Model (GPAIM) flagged with systemic risk requires more than six times as many controls as a free and open-source GPAIM. These findings provide organizations with evidence-based guidance for optimizing their compliance strategies.

1. Introduction

1.1 Context

As enforcement of the EU AI Act unfolds, organizations face the challenge of translating complex regulatory requirements into practical, actionable compliance verification tasks. While basic checklists, cheat sheets, and guidelines exist, there is a critical need for structured approaches that translate the requirements into concrete, granular implementation guidance across roles, risk tiers, and AI system product classifications.

This article introduces a targeted questionnaire that automatically maps EU AI Act requirements to users’ specific contexts, demonstrating how proper scoping can streamline compliance efforts across the Act’s key dimensions.

Going beyond traditional assessment tools, we establish a direct connection between questionnaire outcomes and compliance workflows – translating user categorizations into precisely filtered sets of requirements and controls.

While this article focuses on reducing the scope of applicable requirements, even the remaining requirements need not be daunting. As recently noted by Lucilla Sioli, head of the European Commission’s AI Office [5]: “Even for high-risk applications, the requirements are not that onerous. Mostly [companies] have to document what they are doing, which is what I think any normal, serious data scientist developing an artificial intelligence application in a high-risk space would actually do.”

1.2 Applicability challenges

The implementation of the EU AI Act presents significant challenges that highlight the need for structured compliance approaches. Beyond traditional software regulations, the Act introduces complexities affecting both organizations developing AI systems and the growing ecosystem of third-party vendors providing AI components, models, and services.

Key challenges include:

  • Classification complexity: The Act’s broad definition of AI systems and multi-tiered risk categories create uncertainty in system classification. For example, determining whether a machine learning component qualifies as a General Purpose AI Model (GPAIM) often requires detailed analysis of its potential applications and capabilities. Additionally, organizations must accurately assess their risk tier – a step that is frequently misinterpreted due to overlapping requirements between different use case categories.
  • Role-based requirements: Multiple stakeholder roles (e.g. providers, deployers, importers, distributors) have overlapping obligations that complicate responsibility allocation. Organizations often struggle to determine which requirements apply when they fulfill multiple roles simultaneously. This is particularly complex in vendor relationships, where responsibilities must be clearly delineated across the AI supply chain, including documentation requirements, compliance monitoring, and incident reporting obligations.
  • Implementation gaps: Limited guidance exists for translating legal requirements into technical controls. Organizations face practical challenges in implementing specific requirements, such as how to demonstrate algorithmic transparency, conduct bias detection, and establish effective human oversight mechanisms.

2. Methodology

The methodology consists of three steps: categorization of the use case, a questionnaire mapping the user to the resulting categories, and a subsequent scoping of the requirements and controls according to those categories. It was originally developed by our external legal expert Aleksandr Tiulkanov [7] and productionized by the Modulos team.

2.1 EU AI Act categorization

The questionnaire follows a decision-tree structure that systematically evaluates an organization’s compliance obligations across multiple dimensions. It explores the following (a minimal code sketch of such a tree follows the list):

  • Organizational roles and responsibilities
  • AI system characteristics and capabilities
  • Intended use cases and risk levels
  • Market presence and distribution models
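
To make the decision-tree idea concrete, here is a minimal Python sketch of how such a questionnaire tree could be represented and traversed. The `Question` class, the branch labels, and the category tags are illustrative assumptions, not the platform’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One node of the scoping decision tree (illustrative, not the platform schema)."""
    text: str
    # Maps a predefined answer either to a follow-up Question or to a set of category tags.
    branches: dict = field(default_factory=dict)

# Hypothetical fragment of the tree: GPAIM evaluation before AI-system qualification.
root = Question(
    text="Is the model a General Purpose AI Model (GPAIM)?",
    branches={
        "yes": {"product:GPAIM"},
        "no": Question(
            text="Does the system meet the Act's definition of an AI system?",
            branches={"yes": {"product:AIS"}, "no": {"out_of_scope"}},
        ),
    },
)

def walk(node, answers):
    """Follow a sequence of predefined answers down the tree; return category tags."""
    for answer in answers:
        node = node.branches[answer]
        if isinstance(node, set):  # leaf reached: these are the category tags
            return node
    raise ValueError("answer path ended before reaching a leaf")

print(walk(root, ["no", "yes"]))  # {'product:AIS'}
```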

2.2 Assessment questionnaire

The questionnaire comprises seven interconnected sections:

1. Basic classification

  • Scientific research exemption
  • GPAIM evaluation
  • EU establishment/location

2. AI system qualification

  • AI system definition
  • Project scope
  • System purpose evaluation

3. Product integration

  • Annex I product coverage
  • System integration assessment
  • Safety component analysis

4. Risk assessment

  • Annex III use case evaluation
  • High-risk determination
  • Impact analysis

5. Market representation

  • EU establishment status
  • Market presence evaluation
  • Compliance obligations

6. Human interaction

  • User interaction assessment
  • Natural person interaction
  • Special categories (biometrics, emotion recognition)

7. Distribution and integration

  • Supply chain role
  • Integration scenarios
  • High-risk system integration

Each section employs targeted questions with predefined response options, enabling automated requirement mapping while maintaining assessment consistency.
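
As an illustration of how predefined response options can drive automated requirement mapping, the sketch below unions the category tags triggered by each (section, answer) pair. The section keys, answer codes, and tags are hypothetical, chosen only to mirror the questionnaire structure above.

```python
# Hypothetical (section, answer) -> category-tag mapping; the section names
# mirror the questionnaire above, but all keys and tags are illustrative.
RESPONSE_CATEGORIES = {
    ("risk_assessment", "annex_iii_use_case"): {"use_case:high_risk"},
    ("risk_assessment", "no_listed_use_case"): {"use_case:limited_risk"},
    ("distribution", "places_on_eu_market"): {"role:provider"},
    ("distribution", "uses_under_own_authority"): {"role:deployer"},
}

def categorize(responses):
    """Union all category tags triggered by the user's answers."""
    tags = set()
    for key in responses:
        tags |= RESPONSE_CATEGORIES.get(key, set())
    return tags

profile = categorize([
    ("risk_assessment", "annex_iii_use_case"),
    ("distribution", "places_on_eu_market"),
])
print(profile)  # {'use_case:high_risk', 'role:provider'}
```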

The figure below shows how the questionnaire is presented on the Modulos AI Governance Platform, together with an extract of the underlying decision-tree flowchart.

Figure 1 – Scoping questionnaire UI (top) and extract of underlying workflow (bottom)

2.3 Scoping 

The scoping framework operates across three primary dimensions, with each EU AI Act requirement mapped to specific categories within these dimensions. This multi-dimensional approach enables precise filtering of applicable requirements at both the organization and the AI application level. Note that the categories below are not mutually exclusive: an organization’s role for a given project may fall under both the “Provider” and “Deployer” categories, or a use case may be subject to both “High Risk” and “Transparency” requirements (a minimal filtering sketch follows the category lists below).

Role

  • Providers
  • Distributors
  • Importers
  • Deployers
  • Authorized Representatives

Use case

  • Prohibited
  • High risk
  • Limited risk
  • Transparency requirements

Product

  • Standard AI Systems (AIS)
  • General Purpose AI Models (GPAIM)

Additional qualifiers

  • Origin consideration (e.g. EU vs. non-EU)
  • Supply terms (e.g. neither free nor open source)
  • Product properties (e.g. systemic risk)
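
Because the categories are not mutually exclusive, scoping can be modeled as a set intersection per dimension: a control applies if it matches at least one of the user’s categories in every dimension. The sketch below is a simplified assumption about the matching rule; the control tags and IDs are invented for illustration, not taken from the platform.

```python
# A minimal sketch of multi-dimensional scoping. Control tags and the
# matching rule are assumptions for illustration only.
CONTROLS = [
    {"id": "C-001", "roles": {"provider"}, "use_cases": {"high_risk"}, "products": {"AIS"}},
    {"id": "C-002", "roles": {"provider", "deployer"}, "use_cases": {"high_risk"}, "products": {"AIS", "GPAIM"}},
    {"id": "C-003", "roles": {"deployer"}, "use_cases": {"transparency"}, "products": {"AIS"}},
]

def in_scope(control, roles, use_cases, products):
    """A control applies if it matches at least one of the user's categories
    in every dimension; users may hold several roles or risk classes at once."""
    return (control["roles"] & roles
            and control["use_cases"] & use_cases
            and control["products"] & products)

# An organization acting as both Provider and Deployer of a high-risk AIS:
scoped = [c["id"] for c in CONTROLS
          if in_scope(c, {"provider", "deployer"}, {"high_risk"}, {"AIS"})]
print(scoped)  # ['C-001', 'C-002']
```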

2.4 Mapping to taxonomy

Each EU AI Act requirement is mapped within Modulos’ AI Governance Taxonomy [3] – a comprehensive ontology that breaks down regulatory frameworks into actionable components. Requirements are decomposed into atomic, reusable Controls that represent concrete implementation tasks. These Controls are framework-agnostic compliance verification units, meaning they can be reused across different regulations while maintaining clear traceability to their source requirements.

Based on the categories determined through the EU AI Act questionnaire, the platform automatically filters relevant Controls, ensuring organizations only implement those matching their specific context. This approach combines precise regulatory compliance with operational efficiency, as Controls completed for one framework can often satisfy requirements across multiple frameworks.
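
The sketch below shows what a framework-agnostic Control with per-framework traceability might look like. The `Control` class, the ISO clause placeholder, and the `coverage` helper are illustrative assumptions; only the reference to Article 12 (record-keeping) comes from the EU AI Act itself.

```python
from dataclasses import dataclass

@dataclass
class Control:
    """A framework-agnostic compliance verification unit (illustrative model)."""
    id: str
    task: str
    sources: dict  # framework name -> requirement IDs this control traces back to

log_control = Control(
    id="C-LOG-01",
    task="Maintain automatically generated event logs for the AI system",
    sources={
        "EU AI Act": ["Art. 12"],       # record-keeping
        "ISO/IEC 42001": ["<clause>"],  # placeholder, not a verified clause number
    },
)

def coverage(completed, framework):
    """Requirement IDs of `framework` satisfied by already-completed controls."""
    return {req for c in completed for req in c.sources.get(framework, [])}

# Completing the control once counts toward both frameworks:
print(coverage([log_control], "EU AI Act"))      # {'Art. 12'}
print(coverage([log_control], "ISO/IEC 42001"))  # {'<clause>'}
```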

3. Results: Quantitative Analysis of Control Distribution

Once all Requirements and related Controls are mapped to the categories, aggregate overviews of the compliance workload become possible.

The diagram below (Figure 2) visualizes the flow of Controls for the EU AI Act, illustrating how they distribute across different stakeholder roles, use cases, and product types. The width of each flow indicates the relative number of Controls applicable in that case.

Figure 2 – Number of controls according to the different EU AI Act categories

This figure clearly shows that Providers and High Risk applications carry the highest compliance burden, while the number of Controls decreases significantly for other roles and lower use case risk levels. This visualization helps organizations understand their compliance scope based on their specific context, demonstrating substantial opportunities for efficiency optimization through automated scoping.

Key findings from our analysis show:

  • A Deployer role comes with about 60% fewer Controls than a Provider role
  • A High Risk use case comes with more than 8 times as many Controls as a non-High-Risk one
  • A Systemic Risk GPAIM comes with more than 6 times as many Controls as a Free and Open Source GPAIM
  • A GPAIM product comes with about 50% fewer Controls than an AIS product

While the number of Controls helps compare compliance workload between categories, the actual effort depends on each Control’s complexity – ranging from simple documentation updates to complex technical implementations. Still, this metric provides a useful proxy for planning your compliance project.
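
One way to refine the raw-count proxy is to weight each Control by a complexity class. The counts and weights below are hypothetical, chosen only to show the arithmetic; the analysis above reports ratios, not absolute numbers.

```python
# Hypothetical complexity weights and per-class control counts for one profile.
weights = {"documentation": 1, "process": 3, "technical": 8}
deployer_controls = {"documentation": 25, "process": 10, "technical": 5}

# Weighted effort estimate: sum of (count x weight) per complexity class.
effort = sum(deployer_controls[k] * w for k, w in weights.items())
print(effort)  # 25*1 + 10*3 + 5*8 = 95 effort units
```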

4. Next Steps

4.1 The Modulos Platform – Bridging Regulatory Complexity and Actionable Compliance

Modulos stands as the bridge between ever-evolving regulations and the practical, day-to-day controls that ensure compliance. We work with leading experts to decode and interpret intricate legal documents, such as the EU AI Act or ISO 42001, into a control framework that maps onto each phase of the AI lifecycle:

1. Requirements: Topical macro areas that can overlap with broader regulatory needs.

2. Control Design: List of controls tailored to data handling, model training, deployment, and monitoring.

3. Lifecycle Implementation: Controls integrated into daily workflows across data science, compliance, and risk management teams.

4. Evidence Tracking & Audit Readiness: Real-time logs and documentation to protect organizations if incidents or lawsuits arise.

By converting complex regulatory language into targeted, actionable steps, Modulos significantly reduces the gap between “what’s required” and “how to do it.” This is a competency that requires deep expertise and human oversight, and for which automation must stay in the hands of experts. We are the first AI governance platform to have achieved product conformity with ISO/IEC 42001:2023 “Artificial intelligence — Management system”, as delivered by the certification body CertX [6], which attests to our expertise.

4.2 Your journey

The Modulos platform is designed to provide immediate value while growing with your compliance needs. Here’s what you can expect:

  1. Initial setup
    • Access to the interactive compliance portal
    • Quick assessment to determine your scope
    • Initial tailored workflow based on your profile
  2. Organization and application integration
    • API connectivity to your evidence sources
    • Customization of compliance workflows
    • Integration with existing tools and processes
  3. Risk management
    • Control-risk mapping
    • Risk identification and mitigation
    • Real-time monitoring dashboards
  4. Operational benefits
    • AI-guided compliance recommendations
    • AI-guided evidence collection
    • Real-time progress tracking
    • Cross-framework content reuse

Organizations can begin their journey by exploring the platform through our free plan [4] or by contacting our team for a personalized onboarding session [4].

5. Conclusion

The automated scoping approach presented here offers organizations a practical path to efficient EU AI Act compliance. By connecting assessment outcomes directly to operational controls, organizations can focus their compliance efforts where they matter most, while maintaining confidence in their regulatory adherence.

About the author

Pierre Oberholzer is a Lead Data Scientist at Modulos AG, currently engaged in developing platforms for responsible artificial intelligence (AI) and interbank transactions. He has accumulated about 15 years of experience in the field, working across banking, consulting, and research. Pierre earned a PhD in Electrochemistry from PSI/ETHZ and holds a Master of Science in Mechanical Engineering from EPFL.

References

[3] https://www.modulos.ai/blog/ai-governance-taxonomy-iso-42001-and-beyond/

[4] https://www.modulos.ai/pricing/

[5] https://sciencebusiness.net/news/ai/eu-losing-narrative-battle-over-ai-act-says-un-adviser

[6] https://www.modulos.ai/press-releases/modulos-iso-42001-product-conformity/

[7] Aleksandr Tiulkanov, Independent expert in European AI regulation and standardisation, https://tiulkanov.info/