
How Can You Explain Machine Learning Models?

Machine Learning (ML) models are increasingly used to augment human decision making in domains such as finance, telecommunications, healthcare, and others. In most cases, users do not understand how these models arrive at their predictions, and this lack of understanding makes it difficult for policy makers to justify decisions based on them. Most ML models are black boxes that do not explain on their own why they reached a specific recommendation or decision, leaving users to say that “the algorithm made me do it”. Linear regression models are simple to explain, but their accuracy is often low; neural networks achieve high accuracy but are hard to explain. To build trust with stakeholders, decision makers must learn techniques for interpreting and explaining these models. Let us examine one such technique in this blog.

Surrogate models can help explain machine learning models of medium to high complexity. A surrogate is a simpler model used to approximate and explain a more complex one. It is assumed to be indicative of the complex model's internal mechanisms, but it cannot perfectly represent the underlying response function, nor can it capture every complex feature relationship. Surrogates help users understand how the model's predictions change as selected attributes from the set of independent variables are varied. From past real-world experience, users usually have clear expectations about how the outputs should respond when the inputs are varied in a particular way, and the surrogate model captures these input-output relations. With simple plots of these relations generated from the surrogate, you can easily explain the model's response to selected attributes over a specific range. To explain the model more fully, you can train multiple surrogate models, each built around one or more inputs from the set of the model's important attributes.
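For example, here is a minimal sketch of such an input-output plot, assuming `surrogate` is an already fitted simple model and `X` is the input data as a NumPy array; both names are placeholders (training a surrogate is covered in the steps below).

```python
import numpy as np
import matplotlib.pyplot as plt

feature_idx = 0                      # the selected attribute to vary
grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), 50)

# Hold all other attributes at their mean and sweep the selected one.
X_sweep = np.tile(X.mean(axis=0), (len(grid), 1))
X_sweep[:, feature_idx] = grid

plt.plot(grid, surrogate.predict(X_sweep))
plt.xlabel("Selected attribute")
plt.ylabel("Surrogate prediction")
plt.show()
```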

Training a surrogate model is one of the easiest ways to interpret the behavior of an existing machine learning model. You don't need to know anything about the production model's internals; you can treat it as a black box that accepts input data and returns an output. The essential requirements for training a surrogate model are:
– An existing machine learning model.
– Input data that can be processed by the existing model. This can be real-world data from the production environment.

Follow these steps (a minimal code sketch follows the list):
1. Pass the input data (independent variables) into the black box model and collect its predictions.
2. Train the surrogate model, using the independent variables from the input data as features and the black box's predictions as the dependent variable.
3. Calculate the prediction error of the surrogate model against the predictions of the black box. The smaller the error, the better the surrogate model explains the black box.
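
Here is a minimal sketch of these steps using scikit-learn. A random forest stands in for the existing black box model, and a shallow decision tree serves as the surrogate; both choices, and the synthetic data, are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Stand-in for an existing production model; in practice you would
# load your black box model instead of training one here.
X, y = make_regression(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

# Step 1: pass the input data through the black box.
bb_pred = black_box.predict(X)

# Step 2: train a simple surrogate on the black box's predictions,
# not on the real outcomes.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, bb_pred)

# Step 3: measure how closely the surrogate tracks the black box.
fidelity = r2_score(bb_pred, surrogate.predict(X))
print(f"Surrogate fidelity (R^2 vs. black box): {fidelity:.3f}")
```

An R² close to 1 here means the surrogate reproduces the black box's predictions well; this fidelity against the black box, not accuracy on the real labels, is what qualifies the surrogate as an explanation.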

When we get a surrogate model with an acceptable prediction error, we can look at its parameters to understand which features are important and how the black box model works. Since the surrogate is trained only on the predictions of the black box rather than on the real outcomes, it can only interpret the model, not the real data.
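Continuing the sketch above, a decision-tree surrogate can be inspected directly; its split rules and feature importances describe the black box's behavior rather than the real outcomes.

```python
from sklearn.tree import export_text

feature_names = [f"x{i}" for i in range(5)]

# Print the surrogate's decision rules and its feature importances.
print(export_text(surrogate, feature_names=feature_names))
print(dict(zip(feature_names, surrogate.feature_importances_.round(3))))
```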

The globally interpretable attributes of a simple model are used to explain the global attributes of a more complex model. Like global surrogate models, local surrogate models are simple models of complex models, but they are trained only on certain interesting rows of data (for instance, the best customers in a dataset, or the pieces of equipment most likely to fail according to some model's predictions). A global surrogate tries to explain the whole logic of the model, while a local surrogate is only interested in explaining predictions within a limited range of the input variables.
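As a sketch of the local case, reusing `X` and `bb_pred` from the earlier example, you can fit a simple model on just the interesting rows; selecting the 100 rows with the highest black-box predictions is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Restrict attention to the rows with the highest black-box predictions.
idx = np.argsort(bb_pred)[-100:]
X_local, y_local = X[idx], bb_pred[idx]

# A linear model fitted in this neighbourhood is a local surrogate;
# its coefficients explain predictions in that region only.
local_surrogate = LinearRegression().fit(X_local, y_local)
print(local_surrogate.coef_.round(3))
```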

See you next time…

Janardhanan PS
Machine Learning Evangelist