NIST AI Risk Management Framework: the four functions explained
The NIST AI Risk Management Framework (NIST AI RMF) is the most widely adopted voluntary framework for managing AI risk. It was published by the US National Institute of Standards and Technology in January 2023, with a Generative AI Profile added in July 2024. This post explains what it is, how the four core functions actually work, and how it fits with ISO/IEC 42001 and the EU AI Act.
Is NIST AI RMF mandatory? No. The NIST AI Risk Management Framework is voluntary. There is no certification, no enforcement mechanism, and no regulator issues penalties for non-compliance. Organizations adopt it because it gives structure to AI governance and because it is the most practical way to operationalize the mandatory risk-management obligations in the EU AI Act and ISO 42001.
You cannot manage AI risk without a framework. You can write policies, convene an ethics board, and run a few red-team exercises, but without a structure for identifying, assessing, and mitigating risk across the full AI lifecycle, you end up with theatre. Ad-hoc governance always loses to the first real incident.
Three frameworks matter in practice. The NIST AI Risk Management Framework (voluntary, US-origin, widest industry mindshare). ISO/IEC 23894 (voluntary, international, the risk-management companion to ISO/IEC 42001). And the EU AI Act (mandatory, Article 9 specifically, if you sell into Europe).
This post focuses on the NIST AI RMF because it is the practical spine: well-specified, framework-agnostic, and the most common way organizations operationalize both the EU AI Act's Article 9 obligation and ISO 42001's risk-management clauses. Modulos is a member of the NIST AI Safety Institute Consortium (AISIC, now part of the Center for AI Standards and Innovation) and uses the AI RMF as the content backbone of its risk management module.
What the NIST AI Risk Management Framework actually is
NIST published the AI Risk Management Framework (AI RMF 1.0) in January 2023 after an open, multi-year consultation. In July 2024 NIST released the Generative AI Profile (NIST AI 600-1), which extends the framework to generative AI and identifies twelve categories of GenAI-specific risk (confabulation, i.e. hallucination; CBRN information or capabilities; information security; data privacy; and others).
The framework is voluntary. That is a feature: it gives organizations a vocabulary and a structure without forcing a particular implementation, which is why it travels well across industries and jurisdictions.
The NIST AI RMF is organized around four core functions: Govern, Map, Measure, Manage. A common misreading is to treat these as four sequential stages. They are not. Govern is cross-cutting: it sits around the other three and is continuously active. Map, Measure and Manage are the operational cycle.
The four functions
Govern. The policies, roles, accountability structures and workforce practices that make risk management real. This is where you set the tone: who owns AI risk, how decisions get made, what documentation is required, how the workforce is trained, how incidents get reported. Govern is the function that determines whether Map, Measure and Manage actually happen or are just a compliance checkbox.
Map. Establish the context for a given AI system. What is it for? Who uses it? What are the intended outcomes and the foreseeable misuse? What data goes in, what decisions come out, what populations are affected? Map produces the shared understanding of the system that everything else depends on. Skip Map and you will Measure the wrong things.
Measure. Quantitative and qualitative assessment of the risks identified in Map. Accuracy, robustness, reliability, fairness, privacy, security, explainability. This includes pre-deployment testing, ongoing monitoring metrics, and tools for tracking changes over time. Measure is where engineering meets governance.
Manage. Prioritize risks by severity and tractability, choose responses (mitigate, transfer, accept, avoid), communicate to stakeholders, and monitor whether the responses are working. Manage is where you make hard calls: which risks are worth which mitigation costs, and which residual risks you can accept.
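To make the Manage step concrete, here is a minimal, hypothetical sketch of a risk register in Python. The `RiskEntry` class, the severity and likelihood scales, and the scoring rule are illustrative assumptions, not part of the NIST AI RMF itself; only the four response options (mitigate, transfer, accept, avoid) come from the framework's vocabulary.

```python
from dataclasses import dataclass

# The four response options named in the Manage function.
RESPONSES = {"mitigate", "transfer", "accept", "avoid"}

@dataclass
class RiskEntry:
    name: str
    severity: int      # illustrative scale: 1 (low) to 5 (critical)
    likelihood: int    # illustrative scale: 1 (rare) to 5 (frequent)
    response: str = "mitigate"

    def __post_init__(self):
        if self.response not in RESPONSES:
            raise ValueError(f"unknown response: {self.response}")

    @property
    def score(self) -> int:
        # A simple severity x likelihood score; real programmes
        # often use richer prioritization criteria.
        return self.severity * self.likelihood

def prioritize(register):
    """Highest-scoring risks first, so mitigation effort goes to them."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    RiskEntry("hallucinated citations in customer answers", 4, 4),
    RiskEntry("training-data privacy leak", 5, 2),
    RiskEntry("minor ranking bias", 2, 3, response="accept"),
]
top = prioritize(register)[0]
print(top.name, top.score)  # the hallucination risk scores 16 and leads the queue
```

The point of even a toy register like this is that every risk carries an explicit, recorded response: accepting a residual risk is a logged decision, not a silent omission.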
The framework comes with a companion Playbook and a Crosswalk document that maps the NIST AI RMF to other frameworks including ISO/IEC 23894 and the OECD AI Principles. If you already have an ISO 27001 or ISO 42001 programme, the Crosswalk is the fastest way to see where work can be reused.
How the NIST AI RMF fits with ISO and the EU AI Act
NIST is not a competitor to ISO or the EU AI Act. It is the practical tool for doing what they require.
EU AI Act Article 9 mandates a "risk management system" across the full lifecycle of any high-risk AI system. The Act does not prescribe how. The NIST AI RMF is the most widely adopted way to operationalize this obligation. The Map and Measure functions map almost directly onto Article 9's requirements to identify foreseeable risks, estimate and evaluate them, and adopt suitable mitigation measures. Using the NIST AI RMF as your internal method does not make you automatically Article 9 compliant, but it gives you the evidence trail a notified body will want to see.
ISO/IEC 42001, the AI management system standard, requires AI risk assessment and treatment in clauses 6 (planning) and 8 (operation). Its Annex B controls reference risk management throughout. The NIST AI RMF slots cleanly underneath ISO 42001 as the risk-treatment engine inside the management system. See our ISO 27001 and ISO 42001 integration piece for how this stacks.
ISO/IEC 23894 is closer to NIST in purpose: both are AI risk management frameworks. 23894 leans more formal (it derives from ISO 31000); NIST leans more operational. Organizations pursuing ISO 42001 certification often cite 23894 in their formal documentation and use the NIST AI RMF as the day-to-day working tool.
A one-line summary: if you want to understand AI risk, read NIST. If you want to be certified, implement ISO 42001. If you want to sell into the EU, comply with Article 9. Do all three by building one programme, not three.
For a fuller comparison see AI governance frameworks compared.
What good implementation looks like
A few things that distinguish serious AI risk management from paperwork.
Risk is measured on real systems, not templates. The evidence that matters is test results on your actual model against your actual data, monitored in production. A risk register with no monitoring is not risk management.
Govern drives accountability to a named person. Not a team, not a committee. A person with the authority to pause deployment.
Risks are tracked through the lifecycle, not just at launch. Model drift, data drift, changing threat models, new prompts, new users. Most AI risk materializes months after launch, which means your Measure function has to keep running.
GenAI gets its own treatment. The GenAI Profile exists because generative systems have risks (hallucination, prompt injection, CBRN uplift, emergent capability) that the base RMF does not address in detail. If you are deploying LLM-based systems, treat the GenAI Profile as mandatory reading.
Integration, not duplication. If you are already running ISO 27001 for information security, most of your governance structure, audit programme and evidence base can be reused. The mistake is building a separate AI risk programme in parallel. See our framework integration piece for the mechanics.
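As an illustration of what a continuously running Measure function can look like, here is a sketch of one common drift metric, the Population Stability Index (PSI), which compares a model's score distribution in production against its distribution at launch. The function, the bin counts, and the example distributions are hypothetical; the 0.2 rule of thumb is a widely used convention, not a NIST requirement.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned distributions (lists of bin
    proportions that each sum to 1). A common rule of thumb:
    PSI > 0.2 signals significant drift worth investigating."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # clamp to avoid log(0) on empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at launch
current  = [0.10, 0.20, 0.30, 0.40]  # distribution observed this month
print(round(population_stability_index(baseline, current), 3))  # → 0.228
```

A PSI of 0.228 against the baseline would cross the conventional 0.2 threshold, which is exactly the kind of signal that should flow from Measure back into Manage as a trigger for review, months after launch.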
Where Modulos fits
The Modulos AI Governance Platform implements the NIST AI RMF as a first-class framework alongside the EU AI Act and ISO 42001. Controls, evidence and risk entries map across all three, so work done to satisfy the NIST Govern function also counts toward ISO 42001 clause 5 and EU AI Act Article 17. The NIST AI RMF Playbook actions are wired into the platform as concrete Controls with implementation guidance, and the GenAI Profile risks are included by default.
Modulos has been an active member of the NIST AI Safety Institute Consortium (AISIC, now part of CAISI) since April 2024, contributing to the development of guidelines and evaluation methods that feed back into the platform.
Frequently asked questions about the NIST AI RMF
What is the NIST AI RMF? The NIST AI Risk Management Framework is a voluntary, lifecycle-based framework for managing risks associated with AI systems. It defines four core functions (Govern, Map, Measure, Manage) and was published in January 2023, with a Generative AI Profile added in July 2024.
Is the NIST AI RMF mandatory? No. It is voluntary and there is no certification. Organizations adopt it because it is the most practical way to operationalize the mandatory risk-management obligations in the EU AI Act (Article 9) and ISO/IEC 42001.
What are the four functions of the NIST AI RMF? Govern (cross-cutting accountability, policies, culture), Map (context, risks, impact), Measure (quantitative and qualitative assessment), Manage (prioritization, response, monitoring).
Does the NIST AI RMF provide certification? No. It is a framework, not a certifiable standard. For certification against an auditable standard, organizations use ISO/IEC 42001.
Conclusion
Ad-hoc AI governance fails the first time a system misbehaves in a way no one anticipated. A framework gives you the structure to anticipate better, measure more rigorously, and respond faster. The NIST AI Risk Management Framework is the most pragmatic starting point: voluntary, well-specified, and designed to integrate with the regulatory frameworks you will eventually have to meet anyway. Start there. Extend to ISO 42001 if you want certification. Add EU AI Act Article 9 compliance on top if you sell into Europe. One programme, three outputs.
If you want to see how the NIST AI RMF runs inside a working governance platform, book a demo.