AI Regulations · March 30, 2026

CBUAE AI Guidance Note: What Financial Institutions Need Now

7 min read

The CBUAE's new AI guidance note tells every licensed financial institution in the UAE exactly what supervisors expect to see. The institutions that get there first will define the standard.


If you're running AI in a UAE-licensed financial institution, something changed on 11 February 2026. The Central Bank of the UAE published its Guidance Note on Consumer Protection and Responsible Adoption and Use of AI and ML by Licensed Financial Institutions. It covers banks, insurers, and every other entity the CBUAE licenses.

The word "guidance" appears throughout, but don't let it mislead you. This document tells you, in operational detail, what your regulator considers good AI governance. When the supervisors walk through your door, this is the checklist they'll be holding.

What the CBUAE now expects you to produce

Here's what the guidance expects, distilled to the operational essentials.

A complete AI inventory with risk classification for every system. Not a rough list of vendor tools, but a documented register containing model name, purpose, and risk rating for every AI system you develop or deploy. The guidance says institutions should create processes to rate the risk of each AI system they use. If you can't answer "how many AI systems are we running and which ones matter most?" you can't comply with anything else in this document.
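As an illustration of what such a register might look like in practice, here is a minimal sketch in Python. The system names, fields, and risk tiers are hypothetical, not taken from the guidance; a real inventory would carry far more attributes (owner, data sources, deployment date, review cadence).

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RiskRating(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystem:
    """One entry in the AI inventory: model name, purpose, risk rating."""
    model_name: str
    purpose: str
    risk_rating: RiskRating
    vendor: Optional[str] = None  # None for models developed in-house


# Hypothetical register entries
inventory = [
    AISystem("credit-scoring-v3", "Retail loan decisioning", RiskRating.HIGH),
    AISystem("chat-assist", "Customer FAQ chatbot", RiskRating.MEDIUM,
             vendor="ExampleVendor"),
    AISystem("doc-classifier", "Internal mail routing", RiskRating.LOW),
]


def systems_by_risk(register, rating):
    """Answer the supervisor's question: which systems matter most?"""
    return [s for s in register if s.risk_rating is rating]


high_risk = systems_by_risk(inventory, RiskRating.HIGH)
```

The point of the structure is that every downstream obligation in the guidance (oversight tiers, bias testing cadence, kill-switch readiness) keys off the risk rating recorded here.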

Board-level accountability for AI outcomes. The guidance goes well beyond awareness or quarterly briefings. It makes senior management and boards responsible and accountable for AI systems and outcomes, including model selection, deployment, resourcing, and ongoing monitoring. One sentence deserves particular attention: institutions should not employ AI models that they have no control over. Read that against the reality of how most institutions adopt AI today. How many tools were purchased by a business unit without risk or compliance involvement? How many use foundation models whose internals are opaque? That sentence makes all of it a governance problem.

A human oversight model that matches the risk. The guidance defines three tiers explicitly. Human-in-the-loop, where AI recommends but a human decides. Human-on-the-loop, where AI operates autonomously on routine tasks while humans monitor and intervene when needed. Human-out-of-the-loop, where AI runs without direct involvement, permitted only for low-risk, non-material processes with controls in place. Your institution needs to map every AI system to one of these three tiers and justify the choice based on consumer risk.
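The tier assignment described above can be made explicit as a simple policy function. This is an illustrative sketch only: the mapping rules below are assumptions for demonstration, and any real mapping would need risk and compliance sign-off against the guidance's consumer-risk criteria.

```python
from enum import Enum


class OversightTier(Enum):
    HUMAN_IN_THE_LOOP = "AI recommends, a human decides"
    HUMAN_ON_THE_LOOP = "AI acts autonomously, humans monitor and intervene"
    HUMAN_OUT_OF_THE_LOOP = "AI runs without direct involvement"


def required_tier(consumer_risk: str, is_material: bool) -> OversightTier:
    """Map a system's consumer risk to a minimum oversight tier.

    Illustrative policy: high-risk systems keep a human in the loop;
    out-of-the-loop operation is reserved for low-risk, non-material
    processes, as the guidance permits.
    """
    if consumer_risk == "high":
        return OversightTier.HUMAN_IN_THE_LOOP
    if consumer_risk == "medium" or is_material:
        return OversightTier.HUMAN_ON_THE_LOOP
    return OversightTier.HUMAN_OUT_OF_THE_LOOP
```

Encoding the policy as code has one governance benefit: the justification for each system's tier is a reviewable artifact rather than tribal knowledge.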

A kill switch for every AI system. Section 6(f) requires institutions to retain the clear and immediate ability, with human intervention, to cease use of any AI model, system, technology, or application, not within a reasonable timeframe but immediately. Institutions that can demonstrate this capability will stand out in supervisory interactions.
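Mechanically, "immediate" means the stop must gate every inference call, not wait for a deployment cycle. A minimal sketch of that pattern, with hypothetical names and no claim to production completeness:

```python
import threading


class KillSwitch:
    """Immediate, human-triggered stop for an AI system.

    Once tripped, every gated call is refused until compliance
    explicitly re-enables the system.
    """

    def __init__(self):
        self._stopped = threading.Event()

    def trip(self, operator: str, reason: str) -> None:
        # A production version would also write an audit-log entry
        # recording who tripped the switch and why.
        self._stopped.set()

    def reset(self) -> None:
        self._stopped.clear()

    def guard(self, inference_fn, *args, **kwargs):
        """Run an inference call only if the switch has not been tripped."""
        if self._stopped.is_set():
            raise RuntimeError("AI system disabled by kill switch")
        return inference_fn(*args, **kwargs)


switch = KillSwitch()
result = switch.guard(lambda x: x * 2, 21)  # runs normally
switch.trip(operator="risk-officer", reason="bias alert")
```

The essential property is that the stop is enforced at the call path itself, so "cease use" takes effect on the very next request.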

Periodic bias testing. At minimum annually, or whenever a model is upgraded, materially changed, or newly introduced. No AI system should be deployed or continue operating if it produces discriminatory outcomes.
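What a bias test looks like in code depends on the metric your policy adopts. As one hedged example, here is a demographic parity gap check, one of many possible fairness metrics; the data, group names, and threshold below are all hypothetical, and the guidance does not prescribe a specific metric or tolerance.

```python
def approval_rate(decisions):
    """Share of positive outcomes (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)


def demographic_parity_gap(outcomes_by_group):
    """Largest difference in approval rates across consumer groups."""
    rates = {g: approval_rate(d) for g, d in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())


# Hypothetical outcomes from a periodic model review
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approval
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approval
}

gap = demographic_parity_gap(outcomes)
THRESHOLD = 0.2  # illustrative tolerance, set by policy
flagged = gap > THRESHOLD  # a flagged model must not continue operating
```

In this example the gap is 0.375, well above the illustrative threshold, so the model would be flagged for remediation before it could remain in production.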

Plain-language disclosures in Arabic and English. With telephone support available in all major languages of the UAE. Consumers must be informed when they're interacting with AI, particularly for high-impact decisions. And they should be offered the ability to opt out.

Third-party audit rights in every AI vendor contract. If you outsource AI, you remain responsible for it. The guidance expects documented due diligence, contractual audit rights, data protection provisions, performance guarantees, and the ability to terminate. Annual cybersecurity reviews by independent third parties. Pre-deployment testing. Documented justification for why you chose that vendor over alternatives.

That's a comprehensive list. The institutions that build this capability first will set the standard for the sector.

Why "guidance" doesn't mean "optional"

If you've operated under CBUAE supervision for any length of time, you know the pattern. Guidance notes establish expectations. Supervisory examinations test against those expectations. Institutions that demonstrate alignment get smoother reviews. Institutions that don't face harder questions, remediation requests, and reputational exposure.

The timeline isn't specified because it doesn't need to be. The guidance is effective now. Your next supervisory interaction will reflect it.

And the CBUAE isn't operating in isolation. This document explicitly references the UAE Charter for the Development & Use of AI, published July 2024, and the National AI Strategy. It builds on the existing Model Management Standards that already govern model risk at UAE financial institutions. This is one piece of a coordinated national posture on AI governance, not a standalone initiative.

The GCC is converging

The CBUAE joins the Qatar Central Bank, which published its own AI Guideline for licensed financial institutions in September 2024, in setting explicit supervisory expectations for AI governance. Qatar's version is notably stronger in some respects: it requires prior QCB approval before deploying high-risk AI systems and mandates annual reporting on all AI systems to the central bank.

The direction across the GCC is now clear. When two central banks in the region publish AI-specific guidance within 18 months of each other and converge on the same governance primitives (AI inventories, board accountability, human oversight, bias testing), it is safe to assume that SAMA, CBK, and CBB are watching closely.

If you operate across the GCC, treat this as a regional consensus forming, not a single-jurisdiction curiosity. The specific requirements will vary, but the direction will not.

For institutions also operating under EU frameworks: the overlap is real, but concentrated

One piece of good news for institutions that serve EU markets or already comply with European regulation. We mapped the CBUAE guidance against the EU AI Act, DORA, and NIS2 at the control level in the Modulos platform. The results are precise and worth understanding.

Half of the CBUAE's app-level controls are shared with the EU AI Act. These are the core AI governance primitives: fairness and bias testing, explainability, human oversight, logging, data governance, and continuous monitoring. If you've built EU AI Act compliance infrastructure, that work carries directly across.

The overlap with DORA and NIS2 is narrower than you might expect: only three shared controls each, concentrated in third-party assurance and incident management. This tells you something important about the nature of the CBUAE guidance. It is an AI governance regulation, not a cybersecurity regulation. Its requirements live in a domain that DORA and NIS2 barely touch. Institutions that have invested only in operational resilience and cyber compliance still have the full AI governance layer ahead of them.

The total efficiency gain is measurable: across all four frameworks, control sharing reduces total control assignments by roughly 23%, concentrated in the areas where it matters most.

[Image: comparison table showing AI governance controls shared across multiple frameworks]

What we've done

At Modulos, we've mapped the CBUAE AI Guidance Note as a framework in our platform. It joins our existing support for the EU AI Act, ISO 42001, NIST AI RMF, OWASP Top 10 for LLMs and Agentic AI, NIS2, and DORA.

Each requirement maps to specific controls. Each control links to evidence collection and continuous monitoring. For institutions already governing AI under other frameworks, shared controls carry across, so the work done once counts everywhere.

The institutions that move first will move fastest

The CBUAE published the exam questions but deliberately left the exam date open. That gives institutions time to build alignment while the language is still "guidance" rather than "requirement."

The institutions that use that time will have governance infrastructure in place when supervisory expectations harden. The ones that wait will be retrofitting under pressure, with auditors watching.

In financial services, the label always changes, and the requirements rarely get softer.


The CBUAE AI Guidance Note is now available in the Modulos platform. To see how it maps to your existing compliance infrastructure, request a demo.
