U.K. Watchdogs Seek To Root Out Discriminatory AI In Loan Applications

According to the Financial Times, U.K. regulators have signaled that they will begin to clamp down on uses of artificial intelligence (AI) in banking that could discriminate against customers. Banks using AI to approve loan applications must be able to prove that the technology will not worsen existing discrimination against minorities.

AI has been a significant growth area in banking of late. According to a Research and Markets report, the worldwide market for AI in banking is projected to see a meteoric rise from $3.88 billion in 2020 to $64.03 billion in 2030, a compound annual growth rate (CAGR) of 32.6%.
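As a quick sanity check on those figures, the CAGR implied by the two endpoints over the ten-year span works out to roughly the reported rate (the small gap to 32.6% presumably reflects rounding in the report's underlying numbers):

$$\text{CAGR} = \left(\frac{64.03}{3.88}\right)^{1/10} - 1 \approx 0.324 \approx 32.4\%$$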

AI in banking has matured markedly in recent years, and as data analysis improves, so does the potential for more accurate decision-making. However, concerns about possible mishandling and misuse have drawn intensified regulatory scrutiny. Earlier this month, the Consumer Financial Protection Bureau (CFPB) cautioned that it would be getting tougher on the misuse of AI in banking to prevent “digital redlining” and “robo-discrimination.”

Also in the U.S., two congressional committee chairs last year asked regulators to ensure that the country’s lenders apply safeguards so that AI improves access to credit for low- and middle-income families and people of color.

Meanwhile, in the EU, regulators urged lawmakers to consider “further analyzing the use of data in AI/machine learning models and potential bias leading to discrimination and exclusion.”

And in the U.K., the Office for AI will release its white paper on governing and regulating AI early this year. This may lead to a shift from the government’s current sector-led approach to comprehensive AI-specific regulations.

There are several risks that come with banks using AI. For instance, the inherent complexity of AI models can create a “black box” problem: with very little transparency into how a model reached its conclusions, accountability and error detection become very difficult. Baked-in biases, which are hard to root out, are another risk. In 2019, for example, there were claims that technology used by Apple and Goldman Sachs to measure creditworthiness might be biased against women. AI algorithms trained on incomplete, biased, or irrelevant data can produce judgments that are themselves biased, which can cause inadvertent discrimination.
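One common way practitioners probe for this kind of baked-in bias is a disparate impact check: compare approval rates across demographic groups and flag any ratio that falls below the “four-fifths rule” of thumb used in U.S. fair-lending analysis. The sketch below is illustrative only; the data, group labels, and threshold are assumptions, not any bank’s actual system.

```python
from collections import defaultdict

# Illustrative loan decisions: (applicant_group, approved) pairs.
# In practice these would come from a model's historical outputs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally approvals and totals per group.
totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

rates = {g: approvals[g] / totals[g] for g in totals}
baseline = max(rates.values())  # highest approval rate across groups

# Disparate impact ratio: each group's rate vs. the best-treated group.
# The "four-fifths rule" (0.8) is a common regulatory rule of thumb.
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A check like this will not catch every form of bias (proxies for protected attributes can survive it), but it gives auditors a concrete, reproducible number to monitor over time.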

The solution is simple: Regulators must give clear, consistent guidance and a framework for banks to work within. Banks will need to recognize the inherent flaws in AI, improve transparency, and take responsibility for the problems it creates. And both banks and watchdogs must introduce policies that minimize the risk of bias and discrimination. Most importantly, there should be a “human-in-the-loop,” meaning a human signs off on algorithmic outputs before they are delivered as advice to customers.
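To make the human-in-the-loop idea concrete, here is a minimal sketch of such a gate. The class, field names, and the 0.9 confidence threshold are illustrative assumptions, not a standard or any bank’s actual pipeline; the point is simply that adverse or uncertain outcomes are never released without human sign-off.

```python
from dataclasses import dataclass

# Hypothetical wrapper around a credit model's output; field names
# and the threshold below are illustrative assumptions.
@dataclass
class ModelDecision:
    applicant_id: str
    approve: bool
    confidence: float  # model's self-reported confidence in [0, 1]

REVIEW_THRESHOLD = 0.9
review_queue: list[ModelDecision] = []

def human_in_the_loop(decision: ModelDecision) -> str:
    """Release only high-confidence approvals automatically;
    route denials and low-confidence cases to a human reviewer."""
    if decision.approve and decision.confidence >= REVIEW_THRESHOLD:
        return "auto-approved"
    # Adverse or uncertain outcomes always get human sign-off.
    review_queue.append(decision)
    return "queued for human review"

print(human_in_the_loop(ModelDecision("A-1", True, 0.97)))   # auto-approved
print(human_in_the_loop(ModelDecision("A-2", False, 0.99)))  # queued for human review
print(human_in_the_loop(ModelDecision("A-3", True, 0.55)))   # queued for human review
```

Routing every denial to a person, not just the low-confidence cases, reflects the regulatory concern here: it is the adverse decisions that carry the discrimination risk.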