The FINMA Risk Monitor 2023 outlines several expectations and guidelines for supervised institutions in Switzerland regarding the use of artificial intelligence (AI). These expectations are primarily focused on governance, risk management, transparency, and non-discrimination.
Here is a summary of its key points:
AI Governance and Responsibility
FINMA emphasizes the importance of clear governance structures and responsibilities when using AI. Because AI applications can make or inform decisions, institutions need to ensure that these decisions are controlled and that responsibilities are clearly assigned, especially in complex processes where in-house expertise may be lacking.
AI Robustness and Reliability
The report acknowledges the risks arising from poor data quality and from the automated optimization of models, which can lead to model drift. Institutions are therefore expected to ensure that AI results are accurate, robust, and reliable, which requires critical assessment of the data, the models, and their outcomes.
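As an illustration of what ongoing robustness monitoring can look like in practice, the sketch below compares a feature's live distribution against its training baseline using the Population Stability Index, one common drift indicator. The data, threshold, and function names are illustrative assumptions, not part of FINMA's guidance.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample.

    A PSI above roughly 0.2 is a common rule of thumb for material drift;
    the actual threshold belongs in an institution's model risk policy.
    """
    # Bin edges come from the training (expected) distribution, with open
    # outer bins so no live value falls outside the histogram.
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins so the logarithm stays finite.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values at model build time
live = rng.normal(0.5, 1.0, 10_000)      # shifted values observed in production
print(f"PSI: {population_stability_index(baseline, live):.3f}")
```

A check like this would typically run on a schedule for each model input, with breaches routed to the model owner defined in the governance structure.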
AI Transparency and Explainability
FINMA highlights the challenge of explainability in AI: the complexity of the algorithms makes it difficult to understand how a specific result was reached. Institutions must ensure that AI applications are transparent and their results explainable, in proportion to their relevance for the recipient and to how they are integrated into processes.
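One widely used, model-agnostic building block for explainability is permutation importance: shuffle one input feature at a time and measure how much the model's output quality degrades. The sketch below applies it to a hypothetical linear scoring model; the model, data, and names are illustrative, not drawn from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scoring model: a fixed linear function of three features.
# In practice this would be the institution's actual (possibly opaque) model.
weights = np.array([2.0, 0.5, 0.0])

def model(X):
    return X @ weights

X = rng.normal(size=(1000, 3))
y = model(X)  # reference outputs to compare against

def permutation_importance(model, X, y, n_repeats=5):
    """Importance = increase in mean squared error when one feature is shuffled.

    Larger values mean the model leans more heavily on that input, giving a
    coarse, model-agnostic picture of which features drive its results.
    """
    baseline = np.mean((model(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            errors.append(np.mean((model(Xp) - y) ** 2))
        importances.append(np.mean(errors) - baseline)
    return np.array(importances)

print(permutation_importance(model, X, y))
```

Here the first feature dominates and the zero-weight third feature scores near zero, which is the kind of summary that can be communicated to a decision recipient without exposing model internals.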
Non-Discrimination by AI
The use of personal data by AI in risk assessment or service development could lead to distortions or incorrect results for underrepresented groups, potentially causing unintentional discrimination. Institutions are therefore required to avoid unjustified discrimination and consider legal and reputational risks associated with it.
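A first, simple check for such distortions is to compare decision rates across groups. The sketch below computes a demographic-parity difference on hypothetical data; the variable names are assumptions, and a rate gap on its own flags a disparity to investigate rather than proving unjustified discrimination.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: binary model decisions (1 = approved) and a protected
# group label for each applicant. Both are randomly generated for illustration.
decisions = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)  # 0 = majority, 1 = underrepresented

def demographic_parity_difference(decisions, group):
    """Difference in approval rates between the two groups.

    A value far from zero indicates the model treats the groups
    differently at the outcome level and warrants a closer review.
    """
    rate_majority = decisions[group == 0].mean()
    rate_minority = decisions[group == 1].mean()
    return float(rate_majority - rate_minority)

print(f"approval-rate gap: {demographic_parity_difference(decisions, group):+.3f}")
```

More refined metrics (e.g. conditioning on legitimate risk factors) exist, but even this coarse check makes the legal and reputational risk measurable and reviewable.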
With these requirements, FINMA expects financial institutions to effectively manage the risks associated with AI, ensuring governance, reliability, transparency, and non-discrimination in their AI applications. The authority also plans to monitor the use of AI by supervised institutions and remain engaged with industry and academic stakeholders on developments in this area.
Modulos offers an AI Governance, Risk and Compliance solution which can help organizations jumpstart their AI compliance journey as governments around the world impose new requirements on the use of AI.