Constraining Machine Learning Credit Decision Models

Constraining machine learning credit decision models serves two purposes:

- Explainability: model drivers are transparent to users and yield actionable conclusions for customers who are declined credit; and
- Generalization: models do not overfit the training data and perform well on new (production) data.
Many jurisdictions require lenders to explain how and why they declined an applicant for credit, stipulating that lenders provide Adverse Action Codes indicating the main reasons for the decline. Correct explanations of why a model's prediction led a lender to decline credit make ML models transparent (there is no "black-box" vagueness about the drivers of the prediction) and actionable (the declined customer has clear, tangible steps they can take to improve their prospects of gaining credit).

As a concrete example of explainability, if the feature with the most negative impact on a declined loan applicant's score is "number of credit searches in the last six months", then the Adverse Action Code could be "number of credit searches in the last six months is too high." This makes the main driver transparent and gives the client a clear action: to improve their creditworthiness, they need to reduce their credit searches. Applicants can more easily become aware of the factors holding them back from better scores and improve their creditworthiness.
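As a minimal sketch of this mapping, the main driver of a decline can be taken as the feature with the most negative contribution to the applicant's score. The feature names, contribution values, and reason codes below are purely illustrative, not a real lender's taxonomy:

```python
# Hypothetical per-feature contributions to a declined applicant's score
# (negative = pushed the score towards decline).
contributions = {
    "number_of_credit_searches_last_6_months": -0.42,
    "utilisation_ratio": -0.15,
    "months_on_book": 0.08,
}

# Illustrative mapping from feature to an Adverse Action Code.
adverse_action_codes = {
    "number_of_credit_searches_last_6_months":
        "Number of credit searches in the last six months is too high",
    "utilisation_ratio": "Credit utilisation is too high",
}

# The main reason is the feature with the most negative contribution.
main_driver = min(contributions, key=contributions.get)
print(adverse_action_codes.get(main_driver, "Other"))
# -> "Number of credit searches in the last six months is too high"
```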
Transparency further assures lenders that credit decisions are based on explainable and defensible features and do not use protected attributes such as gender, religion, or ethnicity.
Many explainability methods exist to help interpret the drivers of complex models, but two have gained popularity (a minimal SHAP sketch follows the list):
- Local Interpretable Model-Agnostic Explanations (LIME)
- SHapley Additive exPlanations (SHAP)
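The rest of this section works with SHAP. The sketch below shows how SHAP values might be computed for a gradient-boosted credit model; the synthetic data and the placeholder names Feature1..Feature5 stand in for a real credit dataset:

```python
import pandas as pd
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in for a credit dataset; Feature1..Feature5 are placeholders.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"Feature{i + 1}" for i in range(5)])

# Unconstrained gradient-boosted model.
model = xgb.XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Each row decomposes one applicant's prediction (in log-odds) into
# additive per-feature contributions around the expected value.
shap.summary_plot(shap_values, X)
```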

The first observation is that the relationship is non-monotonic: as Feature1 values increase, predicted creditworthiness improves up to a point, beyond which it is predicted to deteriorate.
The first action needed is to enforce monotonic constraints, which force model predictions to increase or decrease monotonically with respect to a feature while all other features are held constant. In the example above, higher values of Feature1 would then always correspond to better creditworthiness. Departures from monotonicity (which frequently occur when monotonic constraints are not applied) seldom represent a genuine pattern; instead they usually indicate overfitting of the in-sample relationship, which reduces model generalization.
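Continuing the hypothetical example above, a monotonic constraint can be applied in XGBoost via the monotone_constraints parameter. The +1 on Feature1 is illustrative; the sign convention depends on how the target is encoded:

```python
# Monotonic constraints in XGBoost: +1 forces the prediction to be
# non-decreasing in that feature, -1 non-increasing, 0 unconstrained.
# The string follows the column order of X (Feature1..Feature5).
monotonic_model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    monotone_constraints="(1,0,0,0,0)",  # constrain Feature1 only
)
monotonic_model.fit(X, y)

# The Feature1 dependence is now monotonic by construction.
shap_values_mono = shap.TreeExplainer(monotonic_model).shap_values(X)
shap.dependence_plot("Feature1", shap_values_mono, X)
```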
Applying monotonic constraints alone is not enough for SHAP values to be used to return Adverse Action Codes. Features can interact with one another in an ML model: when they do, the prediction cannot be expressed as the sum of individual feature effects, because the effect of one feature depends on the values of others.
The following SHAP dependence plot shows how the effect of Feature1 depends on the value of Feature2: the interaction between Feature1 and Feature2 shows up as a distinct vertical pattern of colouring.
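A sketch of how such a plot might be produced with the shap package, using the unconstrained model and the placeholder feature names from the earlier example:

```python
# Colour each point by the value of Feature2; vertical dispersion in the
# colouring at a fixed Feature1 value indicates an interaction.
shap.dependence_plot(
    "Feature1",
    shap_values,
    X,
    interaction_index="Feature2",
)
```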

The second action is to enforce interaction constraints, which isolate the model's treatment of each feature from every other feature, providing a clear picture of how an individual feature predicts risk: as a result, the model prediction is the sum of the individual feature effects.
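A sketch of interaction constraints in XGBoost, continuing the hypothetical example. Placing each feature in its own group forbids all interactions, so the model becomes additive in its features:

```python
# Interaction constraints: features may only interact within the same group,
# so one group per feature forbids all interactions. Indices refer to the
# columns of X (0 = Feature1, ..., 4 = Feature5).
additive_model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    interaction_constraints=[[0], [1], [2], [3], [4]],
)
additive_model.fit(X, y)
```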
When both monotonic and interaction constraints are applied, SHAP values can be used to return Adverse Action Codes (additional benefits include faster training, better model generalization, and easier-to-interpret feature importance calculations). The following SHAP dependence plot shows the effect of Feature1 on the model prediction after both constraints have been applied: there is now a monotonic, one-to-one relationship between the feature values and the SHAP values.
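A sketch of fitting a model with both constraints on the hypothetical data and producing such a dependence plot:

```python
# Both constraints together: monotonic in Feature1 and no interactions.
constrained_model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    monotone_constraints="(1,0,0,0,0)",
    interaction_constraints=[[0], [1], [2], [3], [4]],
)
constrained_model.fit(X, y)

shap_values_c = shap.TreeExplainer(constrained_model).shap_values(X)

# With no interactions, no colouring by a second feature is needed: the
# SHAP values of Feature1 are a monotonic, one-to-one function of Feature1,
# so the most negative contribution for a declined applicant can be mapped
# directly to an Adverse Action Code as sketched earlier.
shap.dependence_plot("Feature1", shap_values_c, X, interaction_index=None)
```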
