As the financial implications of the COVID-19 crisis start to unfold, banks and other lenders are under increased scrutiny. The public, press, and governments are blaming these financial institutions for taking advantage of the increase in requests for credit. There are many things credit lenders can do to provide better service to hurting households and businesses, from improving their digital experience while customers shelter in place, to offering different loan routes. In this post, we want to address one aspect of credit risk management: the biases that can be inherent to AI-based credit risk modeling.

Inherent biases in AI-based modeling

There are two main reasons why AI-based models are more prone to inherent biases than simple models:

  1. The black box: In a simple model, like a scorecard, we know the weight of each feature in the model. In AI/ML-based models, it is much more difficult to see how much each feature affected the outcome. This, of course, has driven the demand for greater model explainability, which is often a real challenge for risk managers.
  2. Use of historical data: You know the saying "history always repeats itself"? Well, I don't buy it. Any statistical model that relies only on historical data cannot give you accurate predictions. This is true for most industries, and doubly true when it comes to credit risk management. Using historical data (and by historical I mean over six months old) hurts your bottom line and your stakeholders. It also creates a built-in bias against specific sectors. Using old data, especially during unprecedented times or following rare events (global pandemic, anyone?), can leave money on the table and lead you to refuse loans to good customers. It can also give you the green light to lend money to those who will not be able to pay it back, deepening their financial troubles.
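
To make the scorecard contrast concrete, here is a minimal sketch of a points-based scorecard in Python. The features, bands, and point values are entirely hypothetical, not taken from any real lender's model; the point is that every feature's contribution to the final score is explicit by construction, which is exactly the transparency a black-box model lacks.

```python
# Hypothetical scorecard: each feature maps a value to a band of points.
# Bands are (upper_bound, points) pairs, checked in order.
SCORECARD = {
    "age": [(25, 10), (40, 25), (200, 35)],
    "income": [(30_000, 5), (60_000, 20), (10**9, 40)],
    "past_delinquencies": [(0, 30), (2, 10), (100, 0)],
}

def score(applicant):
    """Return the total score and a per-feature breakdown."""
    total = 0
    breakdown = {}
    for feature, bands in SCORECARD.items():
        value = applicant[feature]
        for upper_bound, points in bands:
            if value <= upper_bound:
                breakdown[feature] = points
                total += points
                break
    return total, breakdown

total, breakdown = score(
    {"age": 30, "income": 50_000, "past_delinquencies": 1}
)
# The breakdown shows exactly which points each feature contributed.
```

Because the weights are the model, a risk manager can justify any individual decision by reading off the breakdown; no post-hoc explanation technique is needed.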

Overcoming biases, increasing accuracy

When we talk about ethical credit risk management, there are several components to consider. We need to think about what goes into the model: are any unethical types of information used? We need to think about transparency, both within the organization and toward the borrower. At BeeEye, we wanted to allow the highest level of explainability, so we integrated Shapley values into our credit risk modeling platform. This way, modelers can easily track the weight given to each feature in the probability-of-default calculation. They can present this internally for compliance purposes, as well as to the borrower, and they can ensure the right types of data are used.
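
To illustrate the idea behind Shapley-value attribution, here is a self-contained sketch that computes exact Shapley values for a tiny, hypothetical default-probability model by enumerating every feature coalition. The model, its weights, and the feature names are made up for illustration; exhaustive enumeration is only feasible for a handful of features, and production platforms use approximations instead.

```python
from itertools import combinations
from math import exp, factorial

def default_probability(features):
    # Hypothetical toy model: a linear score squashed to (0, 1).
    # Weights and feature names are illustrative only.
    raw = (2.0 * features["debt_ratio"]
           - 0.00003 * features["income"]
           - 0.1 * features["years_employed"])
    return 1 / (1 + exp(-raw))

def shapley_values(model, x, baseline):
    """Exact Shapley values: average each feature's marginal
    contribution over every coalition of the other features,
    with features outside the coalition held at the baseline."""
    names = list(x)
    n = len(names)
    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = {f: x[f] if f in subset or f == i else baseline[f]
                          for f in names}
                without_i = {f: x[f] if f in subset else baseline[f]
                             for f in names}
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

applicant = {"debt_ratio": 0.6, "income": 42_000, "years_employed": 3}
average = {"debt_ratio": 0.35, "income": 55_000, "years_employed": 8}
phi = shapley_values(default_probability, applicant, average)
# Efficiency property: the attributions sum exactly to the gap between
# this applicant's score and the baseline applicant's score.
gap = default_probability(applicant) - default_probability(average)
```

The efficiency property is what makes this useful for compliance: the per-feature attributions fully account for why this applicant's probability of default differs from the baseline, so nothing about the decision is left unexplained.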

In the coming months, lenders will be at the center of the economic recovery, which also means they'll be getting a lot of public attention. Their lending practices and considerations will be constantly reviewed. Ensuring the right data goes into your models, and being mindful of potential biases, is key to making the right decisions for your organization and your customers. Model explainability will be more critical than ever before.