
A taxonomy of explainable AI (XAI) models

I am reading a very interesting paper called Principles and Practice of Explainable Machine Learning by Vaishak Belle (University of Edinburgh & Alan Turing Institute) and Ioannis Papantonis (University of Edinburgh), which presents a taxonomy of explainable AI (XAI).

 

XAI is a complex subject and, as far as I can see, a taxonomy of XAI has not been presented before. Hence, I hope you will find this useful (paper link below).

 

The paper itself is detailed, so I will explain only a small section here, adapted from the paper.

 

For opaque models, types of post-hoc explanations include (adapted from the paper):

 

  • Text explanations produce explainable representations utilizing symbols, such as natural language text.
  • Visual explanations aim at generating visualizations that facilitate the understanding of a model.
  • Local explanations attempt to explain how a model operates in a certain area of interest.
  • Explanations by example extract representative instances from the training dataset in order to demonstrate how the model operates.  
  • Explanations by simplification refer to techniques that approximate an opaque model using a simpler one, which is easier to interpret.
  • Feature relevance explanations attempt to explain a model’s decision by quantifying the influence of each input variable.

 

Model-agnostic explainability approaches are designed to be flexible and do not depend on the intrinsic architecture of a model (such as a random forest). These approaches relate only the inputs to the outputs. Model-agnostic approaches include explanation by simplification, explanation by feature relevance, and explanation by visualization.

1)  Explanation by simplification: includes techniques like LIME (Local Interpretable Model-Agnostic Explanations), which approximates an opaque model locally, in the surrounding area of the prediction we are interested in explaining.
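As a concrete illustration, here is a minimal sketch of explanation by simplification with the lime package, assuming scikit-learn's breast cancer dataset and a random forest as the opaque model (these choices are mine, not the paper's):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque model on an illustrative dataset (assumption, not from the paper)
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Fit a local, interpretable surrogate around one prediction
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0],            # the instance whose prediction we want explained
    model.predict_proba,     # the black-box prediction function
    num_features=5,          # keep the local surrogate small and readable
)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

The weights describe the simple surrogate model only in the neighbourhood of this one instance; they are not a global explanation of the random forest.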

2)  Feature relevance: includes techniques like SHAP (SHapley Additive exPlanations), which builds a linear model around the instance to be explained and then interprets the coefficients as each feature's importance. SHAP is similar to LIME, but the theory behind SHAP is grounded in coalitional game theory (Shapley values).
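A minimal sketch of feature relevance with SHAP's model-agnostic KernelExplainer follows, assuming the shap package and the same illustrative random forest setup as above (the model and sample sizes are my own assumptions, not the paper's):

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative opaque model (assumption, not from the paper)
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# A small background sample approximates the expected model output
background = shap.sample(data.data, 50)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Shapley values for one instance: per-feature contributions that, together
# with the base value, add up to the model's predicted probability
shap_values = explainer.shap_values(data.data[0], nsamples=200)
print(shap_values)
```

Unlike LIME's purely local weights, Shapley values carry game-theoretic guarantees such as local accuracy: the attributions sum to the difference between the prediction and the expected prediction.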

 

3)  Visual explanations: examples include ICE (Individual Conditional Expectation) and PD (Partial Dependence) plots.
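As an illustration, here is a minimal sketch of PD and ICE curves using scikit-learn's PartialDependenceDisplay (scikit-learn 1.0 or later); the diabetes regression dataset and the two features shown are illustrative assumptions:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Illustrative opaque regressor (assumption, not from the paper)
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays one ICE curve per instance on top of the averaged
# PD curve, showing how predictions change as each chosen feature varies
PartialDependenceDisplay.from_estimator(
    model, X, features=["bmi", "s5"], kind="both"
)
plt.show()
```

The PD curve shows the average effect of a feature on the prediction, while the individual ICE curves reveal whether that average hides heterogeneous behaviour across instances.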

 

You can download the full paper here: Principles and Practice of Explainable Machine Learning by Vaishak ...

 
