AI governance is a hot topic, yet the discussion rarely translates into actionable solutions. With a flood of offerings from non-practitioners, it can be overwhelming for organizations to find clear, practical paths to compliance with standards like ISO 42001 and the EU AI Act.
At Modulos, we address this with our AI Governance Platform, which guides users through the AI compliance process. Our research shows that up to 50% of Controls can be reused when moving from ISO 42001 AI governance to EU AI Act compliance, significantly streamlining the process. In this article, we introduce the AI governance taxonomy at the core of our platform.
A New AI Landscape: Navigating AI Governance
The landscape of AI governance has rapidly evolved, introducing new challenges for organizations striving to comply with various frameworks. From legal standards to technology and business needs, navigating the complexities of AI governance is essential for ensuring ethical, transparent, and compliant AI systems.
Let’s explore the different dimensions of this landscape.
Legal Side
In recent years, new regulatory frameworks, such as the EU AI Act, have emerged to govern the growing influence of artificial intelligence (AI) systems. The EU AI Act entered into force in August 2024, though the standardization work led by CEN/CENELEC is still pending, creating uncertainty about specific compliance requirements.
At the same time, ISO 42001—published in December 2023—has gained traction as a recognized standard for AI governance. It is the first auditable AI management system standard, providing a structured framework for managing AI risks.
In contrast, the United States lacks a harmonized legal framework similar to the EU AI Act. However, the NIST AI RMF has become a primary reference, offering guidance for managing AI-related risks. These frameworks aim to mitigate the potential harms that AI technologies can cause—whether to users, consumers, organizations, or society at large—while encouraging innovation.
Technology Side
On the technology front, AI applications—especially with the rise of Generative AI—bring vast benefits but also specific risks, including bias, misinformation, and lack of transparency. After more than a decade of adoption, traditional AI is scaling and becoming critical to business operations. With this expansion come challenges: organizations must address the high risks associated with AI integration, ensure alignment with business goals, and maintain control over AI use.
This rapidly evolving AI landscape calls for structured approaches to AI governance and risk assessment, ensuring that AI can deliver its benefits without creating unintended harm.
Business Side
Businesses need to harness AI’s potential for growth while ensuring compliance and managing risks. To succeed, companies must adopt clear strategies that align AI initiatives with their goals.
Navigating the complex AI regulatory framework requires balancing innovation with responsibility, ensuring that AI systems meet regulatory compliance requirements without hindering operational efficiency or time-to-market.
Additionally, fostering customer trust, maintaining agility, and managing sector-specific compliance and data governance requirements are critical for competitive success. Structured, effective governance approaches help businesses meet these demands without slowing down.
A Rising Gap in AI Governance
Organizations today face a complex landscape of AI governance frameworks and standards, each with its own regulatory requirements. While attempts have been made to map these frameworks to one another, a lack of alignment leaves many organizations unsure of where to begin. Companies struggle to understand how to comply with multiple regulations and standards simultaneously.
This creates an urgent need for cross-framework AI compliance, helping organizations streamline their efforts. Without a unified approach, businesses are left with fragmented actions and no clear roadmap.
This gap exists because regulatory bodies and AI developers often have different perspectives. Regulators focus on long-term societal risks, while developers prioritize innovation and shorter-term goals.
This disconnect leads to several challenges:
- Efficiency loss and risk of non-compliance: The complexity of navigating multiple frameworks without clear guidance results in confusion and increased risk of non-compliance. Organizations may waste significant time and resources trying to understand overlapping or conflicting requirements.
- Opportunity loss because of hesitation in AI adoption: Many companies are hesitant to adopt AI as they are unsure how to manage compliance risks. Legal concerns and the perceived high cost of meeting regulatory obligations cause delays, which results in missed opportunities for innovation and competitive advantage.
Platforms like Modulos are essential to address this gap. The Modulos AI Governance Platform provides structured, reusable Controls that simplify compliance across multiple frameworks, reducing complexity and risk.
At the core of our platform is content—organized in a way that is understandable and actionable for users. This content allows businesses to align with regulations without getting lost in the legal intricacies, enabling them to adopt AI responsibly and efficiently.
Modulos’ AI Governance Taxonomy
We believe that content taxonomies structured through ontologies pave the way for the widespread adoption of AI governance by enhancing understanding and implementation across various standards.
At the core of Modulos’ platform is a centralized repository that captures and organizes relevant regulatory frameworks and standards, including ISO 42001, the EU AI Act, and the NIST AI RMF. This structured AI governance taxonomy enables organizations to easily manage compliance across frameworks within one unified system.
Key aspects of our taxonomy include:
- Framework units: Original regulations, standards, ethical frameworks, or internal company policies.
- Categorized by requirements: We extract topical areas that remain fully traceable to the original regulatory texts, including framework-specific requirements. This ensures that every requirement is accurately reflected in our platform, allowing users to grasp the bigger picture without getting lost in legal complexity.
- Atomic structure: Each requirement is broken down into small, actionable tasks called Controls. These allow practitioners to focus on concrete actions instead of sifting through complex regulatory language, simplifying the process and making compliance measures easier to implement.
- Organized by goals: To simplify navigation, Controls are grouped by responsible-AI goals such as performance, transparency, fairness, or security. This goal-based structure helps users quickly locate relevant Controls based on their specific needs or challenges.
- Clear mapping to requirements: Each Control in the taxonomy can be traced back to the specific regulatory or framework requirements it addresses. This facilitates a top-down audit: scanning by chapter or article number in the original regulatory text collects all related Controls, evidence, and risks.
- Cross-framework reusability: One of the key strengths of our platform is the reusability of Controls across different frameworks. For example, once a Control is fulfilled under ISO 42001, it can be applied to the EU AI Act or NIST AI RMF where relevant (see the next section), reducing duplication of effort and saving valuable time.
- Automated gap analysis: Our taxonomy also provides automated gap analysis, identifying areas where compliance may be lacking across multiple frameworks. This helps organizations pinpoint missing controls and take proactive steps to achieve full compliance, minimize risk, and avoid unnecessary manual work.
Our taxonomy goes beyond merely categorizing Controls. It is built on an ontology of terms, properties, and dependencies, ensuring that all elements are logically connected and their relationships captured. This structure underpins the automated gap analysis described above and provides a foundation for compliance-management workflows, enabling organizations to adopt AI responsibly and efficiently and paving the way for the future of AI governance.
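To make this concrete, here is a minimal sketch of how such a taxonomy could be represented. All class names, fields, and helper functions are hypothetical simplifications for illustration; they are not the actual Modulos data model.

```python
# Minimal, illustrative sketch of a cross-framework Control taxonomy.
# All names and fields are hypothetical; this is not the Modulos schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Requirement:
    framework: str   # e.g. "ISO 42001", "EU AI Act", "NIST AI RMF"
    reference: str   # clause, chapter, or article number in the source text
    topic: str       # topical area extracted from the regulatory text

@dataclass
class Control:
    code: str        # atomic unit of work, e.g. "MCF-35"
    name: str
    description: str
    goal: str        # responsible-AI goal: fairness, transparency, security, ...
    requirements: list[Requirement] = field(default_factory=list)  # cross-framework mapping
    fulfilled: bool = False

def controls_for_reference(controls: list[Control], framework: str, reference: str) -> list[Control]:
    """Top-down audit view: all Controls mapped to a given clause or article."""
    return [c for c in controls
            if any(r.framework == framework and r.reference == reference for r in c.requirements)]

def gap_analysis(controls: list[Control], target_framework: str) -> list[Control]:
    """Unfulfilled Controls that the target framework requires.
    A Control fulfilled once counts for every framework it is mapped to."""
    return [c for c in controls
            if not c.fulfilled and any(r.framework == target_framework for r in c.requirements)]
```

In this sketch, fulfilling a Control once removes it from the gap analysis of every framework it is mapped to, which is the mechanism behind the reuse figures discussed later in this article.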
Our AI Governance Controls – Actionable and Reusable Components
Modulos’ platform centers around actionable, reusable Controls—practical steps organizations can take to comply with regulatory requirements. While regulations are written in legal language, our Controls are designed for technology practitioners, allowing them to implement compliance measures without wading through that complexity.
Here’s how we tailor our Controls:
- Practitioner-oriented: Our Controls abstract away higher-level regulatory language, focusing instead on what practitioners must do. This allows technology teams to concentrate on the actions required without needing to translate legal texts into technical steps.
- Atomic unit of work: Each Control is broken down into its smallest actionable form, making it easier for practitioners to organize their work and compartmentalize results. This structure enhances understanding through well-scoped tasks and better contextualization, thereby streamlining execution and task management.
- Framework-agnostic and reusable: The Controls are written in a way that makes them applicable across different regulatory frameworks. This means that a Control fulfilled under one framework, such as ISO 42001, can easily be reused under another, like the EU AI Act or NIST AI RMF. This cross-framework reusability reduces duplication of effort and streamlines compliance processes. Additionally, it offers a degree of reuse across different AI system initiatives, further enhancing efficiency and consistency.
Let’s take a look at an example of one of our reusable Controls that applies to multiple frameworks:
- Code: MCF-35
- Name: Data Bias Prevention and Mitigation
- Description: Document the measures taken to manage undesired bias and their outcome.
This Control addresses a common requirement across ISO 42001, the EU AI Act, and the NIST AI RMF, making it applicable regardless of which framework the organization is working with. By documenting how bias is managed, organizations not only fulfill a compliance requirement but may also contribute to responsible AI practices defined by frameworks beyond those covered here.
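Continuing the sketch from the taxonomy section above, such a Control could be expressed as a single record mapped to requirements in all three frameworks. The requirement references below are illustrative placeholders, not the official mapping used on the platform.

```python
# Illustrative only: reuses the Control and Requirement classes sketched earlier;
# the requirement references are placeholders, not the official platform mapping.
mcf_35 = Control(
    code="MCF-35",
    name="Data Bias Prevention and Mitigation",
    description="Document the measures taken to manage undesired bias and their outcome.",
    goal="fairness",
    requirements=[
        Requirement("ISO 42001", "Annex A (data for AI systems)", "Bias management"),
        Requirement("EU AI Act", "Art. 10 (data and data governance)", "Bias examination and mitigation"),
        Requirement("NIST AI RMF", "MEASURE", "Bias evaluation"),
    ],
)

# Fulfilling the Control once removes it from the gap analysis of every mapped framework.
mcf_35.fulfilled = True
```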
By focusing on actionable, reusable Controls, Modulos ensures that organizations can efficiently meet compliance obligations while staying agile and adaptable as new regulations emerge. Note that as a pre-certified ISO 42001 platform, Modulos allows users to apply our Controls to maximize their chances of being audit-ready for this standard.
Efficiency Gains Through Control Reuse
For any AI system, organizations must fulfill numerous compliance Controls. This can be time-consuming and repetitive without a structured approach, particularly when operating under multiple frameworks and AI systems. At Modulos, we employ a top-down approach, breaking down overarching frameworks into actionable Controls that can be reused across frameworks.
Based on our analysis, we can quantify how many Controls are shared across frameworks. The following figure illustrates the overlap in shared Controls between three frameworks – ISO 42001, the EU AI Act, and NIST AI RMF 1.0 – as introduced in the previous section.
This cross-framework alignment means that, for an AI system assessed under two or more regulatory frameworks, a significant portion of the Controls, along with their related artifacts and risks, does not need to be re-addressed once fulfilled under one framework. By reusing Controls, organizations can reduce redundancy, save time, and streamline their compliance efforts.
Let’s consider an AI system that has already been audited against ISO 42001, but the organization now wants to prepare for compliance with the EU AI Act. Here’s how control reuse can significantly reduce the workload:
- Base work: 96 Controls to be completed by the organization for ISO 42001
- Without reuse: 128 Controls to be completed for the EU AI Act
- With reuse: 63 Controls to be completed for the EU AI Act, saving the effort of completing 65 Controls
→ Organizations adopting ISO 42001 this way save roughly 50% of the effort (65 of 128 Controls) when moving to the EU AI Act
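As a quick sanity check on these figures, using only the Control counts quoted above:

```python
# Back-of-the-envelope check of the reuse savings quoted above.
eu_ai_act_controls = 128                  # Controls required for the EU AI Act
reusable_from_iso_42001 = 65              # already fulfilled under ISO 42001 and reusable
remaining = eu_ai_act_controls - reusable_from_iso_42001   # 63 Controls left to complete
savings = reusable_from_iso_42001 / eu_ai_act_controls     # ~0.51, i.e. roughly half the effort
print(remaining, f"{savings:.0%}")                          # -> 63 51%
```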
The same principle applies to organizations expanding into other frameworks, such as moving from ISO 42001 to NIST RMF. The efficiency gains from control reuse make compliance faster, less resource-intensive, and more manageable.
→ Organizations adopting ISO 42001 this way also come very close to fulfilling NIST AI RMF expectations.
Conclusion
Navigating the complex world of AI governance and regulatory compliance can be challenging. However, by leveraging structured, reusable Controls, Modulos’ AI Governance Platform offers a practical solution that simplifies compliance, reduces redundancy, and saves time. Our framework-agnostic approach allows organizations to address AI regulatory requirements efficiently while minimizing risk and ensuring they are prepared for future standards.
If you want to streamline your AI compliance process and stay ahead of regulatory changes, it’s time to explore the Modulos AI Governance Platform. Contact us today for a demo and see how our reusable Controls can help your organization meet its AI governance and compliance needs with ease.
We will continually update our Taxonomy as we incorporate experience from our partners, and we will add new frameworks as we analyze them. If you want the latest mapping in your inbox, subscribe to our newsletter here.
About the author
Pierre Oberholzer is a Lead Data Scientist at Modulos AG, currently engaged in developing platforms for responsible artificial intelligence (AI) and inter-banking transactions. He has accumulated about 15 years of experience in the field, working across banking, consulting, and research. Pierre earned a PhD in Electrochemistry from PSI/ETHZ and holds a Master of Science in Mechanical Engineering from EPFL.