
Visualizing XAI in Credit Risk

February 02, 2021 | Matthew Turner

AI models hold the promise of better risk decisions. But how do risk managers and compliance officers vet AI solutions for statistical soundness, industry soundness, and regulatory compliance? Our customers told us they want to visualize explainable AI (XAI) in action. Specifically, they want to see:

  • how the model works 
  • what goes into the model
  • what drives the model’s prediction 
  • what trends in the data are captured 
  • what reason codes are returned

We listened and developed a visualization tool for one of our explainable AI solutions, NeuroDecision® – a neural network model for any risk decision application. Risk managers and compliance officers can now delve into the model predictions and explanations to ensure a solution is robust and compliant. 

Let me show you.

The visualization tool is built on a NeuroDecision model trained on credit data for demonstration purposes. A self-guided tutorial in the tool walks you through the workings of a NeuroDecision model, showing how attributes are combined to form signals within the neural network. These signals are then used to generate an output prediction that captures non-linearities and interactions in the data, making the model a superior choice to traditional logistic regression.
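
The flow described above, attributes combining into hidden signals that feed a nonlinear prediction, can be sketched with a toy network. This is an illustration only, not the actual NeuroDecision architecture; the weights, attribute values, and score scale are all made up:

```python
import math

def tiny_net(attrs, w_hidden, w_out, bias_out=0.0):
    """Toy feed-forward scorer: attributes -> hidden 'signals' -> score."""
    # Each hidden "signal" is a nonlinear blend of the input attributes.
    signals = [math.tanh(sum(w * a for w, a in zip(ws, attrs)))
               for ws in w_hidden]
    # The output mixes the signals, which is what lets the model capture
    # non-linearities and interactions that logistic regression cannot.
    return bias_out + sum(w * s for w, s in zip(w_out, signals))

# Two made-up attributes, two hidden signals, illustrative weights.
score = tiny_net(attrs=[0.5, -1.0],
                 w_hidden=[[1.0, 0.5], [-0.5, 1.0]],
                 w_out=[2.0, -1.0],
                 bias_out=600.0)
```

A logistic regression collapses this to a single weighted sum; the hidden layer is what gives the network its extra expressive power.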

NeuroDecision is unique and regulatory-compliant because the model rewards positive behavior and penalizes negative behavior. The tutorial covers this, allowing risk managers and compliance officers to visualize how predictions are made in a logical and compliant manner.

After the self-guided tour, risk managers and compliance officers can dive deeper into a few more pages to understand the driving forces in the model and overlay their expertise to cross-examine it. These pages expose the model for experts to probe, so AI is no longer a black box.

Want to learn more about Machine Learning and Explainable AI in Credit Risk? Check out this Q&A.

Attribute Impact

This part of the demo looks at how each input attribute affects the model when applied to a sample. The chart below shows whether an attribute affects the scores generated by the model positively or negatively, and by how much, on a standardized scale. For example, historical 60+ days past due occurrences on a consumer's credit file have the largest negative effect in the model. This is logical: most consumers will not have 60+ days past due occurrences, but when they do, the impact on their score is large and negative.
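
One way to compute this kind of view is to average each attribute's per-consumer score contributions and put the averages on a common scale. A minimal sketch, assuming we already have per-consumer contributions (attribute names and numbers below are illustrative, not from the real model):

```python
from statistics import mean, pstdev

def attribute_impact(contributions):
    """contributions: dict mapping attribute name -> list of per-consumer
    score contributions. Returns each attribute's mean contribution,
    standardized by the spread of all contributions pooled together."""
    all_vals = [v for vals in contributions.values() for v in vals]
    scale = pstdev(all_vals) or 1.0  # avoid division by zero
    return {attr: mean(vals) / scale for attr, vals in contributions.items()}

# Hypothetical sample: the past-due attribute is rare but strongly negative.
sample = {
    "60+ days past due":     [-3.0, 0.0, 0.0, -4.0],
    "balance-to-loan ratio": [0.5, -0.5, 1.0, 0.0],
}
impacts = attribute_impact(sample)
```

On this toy sample, the past-due attribute averages out strongly negative even though most consumers contribute zero to it, mirroring the pattern described above.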

Reason Code Distribution

The next page in the visualization shows the distribution of reason codes that would be returned to consumers on a sample portfolio. Reason codes are required by regulation and inform a consumer why they received the score they did and not a better score. Regulation requires that the top four reason codes be returned for each consumer. 

The chart below shows each attribute in the model and the percentage of the population that received that attribute as one of its top four reason codes. For example, the ratio of a consumer's current balance to original loan amount on installment loans is returned as a reason code for 75% of consumers in this portfolio. This is logical: for most consumers without substantial derogatory information on their credit file, the key differentiator is how successfully they have paid down debt.
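
The tally behind such a chart can be sketched simply: for each consumer, take the four attributes that pulled the score down most, then count how often each attribute lands in a top-four list. Attribute names and contribution values below are hypothetical placeholders:

```python
from collections import Counter

def reason_code_distribution(portfolio, top_n=4):
    """portfolio: list of dicts mapping attribute -> score contribution
    (more negative = stronger adverse reason). Returns each attribute's
    share of consumers for whom it was a top-N reason code."""
    counts = Counter()
    for consumer in portfolio:
        # Sort attributes by contribution, most adverse (lowest) first.
        top = sorted(consumer, key=consumer.get)[:top_n]
        counts.update(top)
    return {attr: counts[attr] / len(portfolio) for attr in counts}

# Two hypothetical consumers with five model attributes each.
portfolio = [
    {"bal_ratio": -10, "past_due": -2, "inquiries": -1, "util": -5, "age_of_file": 0},
    {"bal_ratio": -8, "past_due": 0, "inquiries": -3, "util": -1, "age_of_file": -2},
]
dist = reason_code_distribution(portfolio)
```

Here "bal_ratio" appears in every consumer's top four, so it would dominate the distribution chart, much like the balance-to-loan attribute in the demo portfolio.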

This whitepaper explains why using the right machine learning technique matters for delivering the right reason codes to individual consumers.

Point Swing by Attribute

The final page in the demonstration shows the maximum number of points that can be lost in scoring for each model attribute. Point swing is calculated by finding the difference between the maximum score that can be achieved at the best value of an attribute and the minimum score that can be achieved at the worst value of the same attribute. 

This information can be used to evaluate the potential impact of an attribute on score changes. For example, a consumer who moves from no unpaid collection amounts to a large unpaid balance on collection accounts can see their score drop by up to 381 points.
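
The calculation described above can be sketched directly: score the model with an attribute at its best value, score it again at the worst value while holding everything else fixed, and take the difference. The scoring function and numbers below are a toy stand-in, not the real model; the 0.05-per-dollar weight is chosen purely so the example reproduces a 381-point swing:

```python
def point_swing(score_fn, baseline, attr, best_value, worst_value):
    """Maximum points that can be lost on `attr` alone, other inputs fixed."""
    best = dict(baseline, **{attr: best_value})
    worst = dict(baseline, **{attr: worst_value})
    return score_fn(best) - score_fn(worst)

# Toy linear scorer standing in for the neural network.
def toy_score(x):
    return 700 - 0.05 * x["unpaid_collections"] - 2 * x["past_due_count"]

baseline = {"unpaid_collections": 0, "past_due_count": 1}
swing = point_swing(toy_score, baseline, "unpaid_collections",
                    best_value=0, worst_value=7620)
```

For a real neural network the best and worst values would be found by searching the attribute's observed range rather than assumed, since the response need not be monotone between the endpoints.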

Dive Deeper

This visualization tool allows anyone to explore explainable AI through NeuroDecision on a demonstration model: specifically, how the model makes regulatory-compliant predictions and which factors drive it. Risk and compliance managers can dive deeper by examining real models built with NeuroDecision. They can explore the data sources leveraged in the model along with underlying raw data trends, attribute trends captured by the model, and model performance and stability statistics.

Matthew Turner

Principal Mathematical Statistician

Matthew holds a Ph.D. and is a Fellow and Principal Mathematical Statistician within Equifax Data and Analytics. He leads a team of data scientists, mathematical statisticians, and developers building Next Generation Decisioning algorithms that leverage big data and machine learning technologies for decisioning applications.