Fairness in credit risk: a Data-Centric AI approach

Written by Elena Maran

Credit risk management is a crucial part of financial institutions’ activities. Across business areas, from corporate to retail to private banking, assessing the risk of a potential customer’s default is of paramount importance for healthy and sustainable growth.

Financial institutions need to balance risk and reward, mindful of internal policies and regulatory requirements.

The benefits of using Artificial intelligence in the credit risk process are undeniable: it enables a more granular approach and, as a result, a better overall distribution of risk.

Artificial intelligence can allow shifting from a transaction-by-transaction process to a more holistic view of the overall risk position and exposure, which improves cost/income ratios, cost of capital, and the distribution of risk-weighted assets. Furthermore, it allows institutions to monitor risk proactively as new information becomes available.

However, credit decisions supported by AI can only be as good as the data available to the financial institution.

Traditional credit risk management uses both qualitative and quantitative inputs from several internal and external sources. Ultimately, it requires human judgment, conformity to the institution’s internal policies and risk appetite, and observance of regulatory requirements.

As such, historical data can carry different types of biases, sometimes human-driven (consciously or unconsciously), sometimes simply derived from historical market conditions or prior internal policies.

Bias in historical data

As an example, a bank’s data on personal loan exposures and defaults may contain fewer observations for female applicants, simply because women historically had far less access to the job market than men and, lacking job income as a source of repayment, often did not qualify for loans.

Equally, a bank’s internal policy may have required loans to be granted exclusively to national residents and not to foreigners.

As market conditions and policies change, the data available to a bank may no longer be representative and, when used as input to an AI model, may produce undesired outcomes. The results can translate into biased and unfair decisions (with respect to protected attributes such as gender, race, or age), which may even be deemed discriminatory.
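Such imbalances can often be surfaced with a simple audit of the historical data before any model is trained. The sketch below is a minimal illustration in Python; the loan table and its gender and approved columns are hypothetical.

```python
import pandas as pd

# Hypothetical historical loan book; column names and values are illustrative.
loans = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "M", "F", "M", "M"],
    "approved": [0,    1,   1,   0,   1,   0,   1,   1,   1,   1],
})

# Group shares: underrepresentation of one gender is itself a warning sign,
# since the model has fewer examples from which to learn about that group.
print(loans["gender"].value_counts(normalize=True))

# Historical approval rate per group: a large gap here will be reproduced,
# or even amplified, by any model trained naively on this data.
print(loans.groupby("gender")["approved"].mean())
```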

This can have severe consequences, in terms of commercial, reputational, and legal risk for the financial institution.

Furthermore, the upcoming European regulation on AI (the EU AI Act) identifies credit scoring as one of the so-called “high-risk AI systems”, which are subject to strict requirements, among them fairness.

Figure: the pyramid of criticality of AI systems

Consequently, financial institutions will need to pay particular attention to the fairness of AI-driven credit processes to avoid hefty fines (up to the higher of 6% of global annual turnover or EUR 30 million), on top of the above-mentioned risks.

At Modulos, we have analyzed a specific use case: assessing the eligibility of customers and prospects for personal loans.

Our objective is to minimize gender bias whilst preserving the required level of accuracy in credit decisions.
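Making that objective measurable requires a fairness metric to sit alongside accuracy. One common choice is the equal opportunity gap: the difference between genders in the approval rate of genuinely creditworthy applicants. The sketch below computes both quantities for the outputs of a hypothetical model; all data is illustrative.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates between groups: how much more often
    creditworthy applicants of one gender are approved than the other."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = [y_pred[(group == gv) & (y_true == 1)].mean()
            for gv in np.unique(group)]
    return max(tprs) - min(tprs)

# Illustrative labels and predictions from a hypothetical credit model.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 1, 1, 1]
gender = ["F", "F", "F", "F", "M", "M", "M", "M"]

accuracy = np.mean(np.asarray(y_true) == np.asarray(y_pred))
print(f"accuracy: {accuracy:.2f}")                               # 0.75
print(f"equal opportunity gap: "
      f"{equal_opportunity_gap(y_true, y_pred, gender):.2f}")    # 0.33
```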

The video interview below, featuring our CEO, Kevin Schawinski, and our financial services specialist, Elena Maran, explains how the Data-Centric AI approach is a powerful tool to address existing imbalances in data and produce fair yet accurate credit decisions.

Fair and accurate credit scoring with a Data-Centric AI approach

We demonstrate how the approach, through its feedback loop between data and model, can overcome the limitations and shortcomings of traditional approaches to bias mitigation, simplifying regulatory compliance and achieving the bank’s objectives faster and more efficiently.
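To make the idea of a data-side fix concrete without describing the platform’s internals, here is a minimal sketch of one classic data-centric intervention, the reweighing technique of Kamiran and Calders: instead of altering the model, the training data is reweighted so that the outcome becomes statistically independent of the protected attribute. All data below is synthetic, and the setup is an assumption for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic, deliberately biased loan history (all values illustrative).
n = 2000
g = (rng.random(n) < 0.7).astype(int)        # group 1 is overrepresented
score = rng.normal(size=n)                   # underlying creditworthiness
# Historical decisions applied a stricter cutoff to group 0:
y = (score > np.where(g == 0, 0.8, 0.0)).astype(int)
proxy = g + rng.normal(scale=0.5, size=n)    # feature correlated with gender
X = np.column_stack([score + rng.normal(scale=0.3, size=n), proxy])

def parity_gap(pred, g):
    """Absolute difference in approval rates between the two groups."""
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

# Baseline model: it reproduces the historical bias via the proxy feature.
base = LogisticRegression().fit(X, y)
print("baseline parity gap:", round(parity_gap(base.predict(X), g), 3))

# Data-side fix (Kamiran & Calders reweighing): weight each (group, label)
# cell by P(group) * P(label) / P(group, label), which makes label and group
# statistically independent in the weighted training data.
w = np.ones(n)
for gv in (0, 1):
    for yv in (0, 1):
        cell = (g == gv) & (y == yv)
        w[cell] = (g == gv).mean() * (y == yv).mean() / cell.mean()

fair = LogisticRegression().fit(X, y, sample_weight=w)
print("reweighed parity gap:", round(parity_gap(fair.predict(X), g), 3))
```

In a Data-Centric AI workflow, this measure-and-adjust step runs as a loop: after each data intervention, the model is retrained and re-evaluated on both fairness and accuracy until both targets are met.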


Video interview with Elena Maran

Watch: Modulos #datacentricai Use Case: Fairness in Credit Risk ML Model (https://www.youtube-nocookie.com/embed/n9w22UHtiiE)