I am reading a very interesting paper, *Principles and Practice of Explainable Machine Learning* by Vaishak Belle (University of Edinburgh & Alan Turing Institute) and Ioannis Papantonis (University of Edinburgh), which presents a **taxonomy of explainable AI (XAI)**.

XAI is a complex subject, and I have not yet seen a taxonomy of it elsewhere, so I hope you will find this useful (paper link below).

The paper itself is detailed; here I explain a small section, adapted from the paper.

For opaque models, the types of post-hoc explanations include (adapted from the paper):

- **Text explanations** produce explainable representations using symbols, such as natural language text.
- **Visual explanations** aim at generating visualizations that facilitate the understanding of a model.
- **Local explanations** attempt to explain how a model operates in a certain area of interest.
- **Explanations by example** extract representative instances from the training dataset in order to demonstrate how the model operates.
- **Explanations by simplification** refer to techniques that approximate an opaque model using a simpler one, which is easier to interpret.
- **Feature relevance explanations** attempt to explain a model's decision by quantifying the influence of each input variable.

**Model-agnostic explainability approaches** are designed to be flexible and do not depend on a model's intrinsic architecture (such as a random forest). These approaches relate only the inputs to the outputs. Model-agnostic approaches can take the form of explanation by simplification, explanation by feature relevance, or explanation by visualization.

1) **Explanation by simplification**: includes techniques like LIME (Local Interpretable Model-Agnostic Explanations), which approximates an opaque model locally, in the area surrounding the prediction we are interested in explaining.
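To make the idea concrete, here is a minimal sketch of the local-surrogate principle behind LIME (not the `lime` library itself, and the model, data, and kernel width below are all illustrative assumptions): perturb the instance of interest, weight the perturbations by proximity, and fit a weighted linear model whose coefficients describe the opaque model's local behaviour.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Opaque model: a random forest trained on synthetic data where
# y depends on features 0 and 1 but not on feature 2.
X = rng.normal(size=(500, 3))
y = X[:, 0] ** 2 + 2 * X[:, 1] + rng.normal(scale=0.1, size=500)
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def local_surrogate(model, x, n_samples=1000, width=0.5):
    """Fit a proximity-weighted linear model around instance x."""
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))  # perturbations
    preds = model.predict(Z)
    # Proximity kernel: perturbations closer to x get higher weight.
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)
    surrogate = Ridge(alpha=1e-3).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

x0 = np.array([1.0, 0.0, 0.0])
coefs = local_surrogate(forest, x0)
print(coefs)  # feature 2 (irrelevant) should get a coefficient near zero
```

The real LIME library adds interpretable feature representations and sampling strategies per data type, but the core mechanic is this weighted local regression.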

2) **Feature relevance**: includes techniques like SHAP (SHapley Additive exPlanations), which builds a linear model around the instance to be explained and then interprets the coefficients as feature importances. SHAP is similar to LIME, but the theory behind SHAP is grounded in coalitional game theory (Shapley values).
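The game-theoretic idea can be sketched directly: a feature's Shapley value is its average marginal contribution over all coalitions of the other features. The brute-force version below (a toy, not the `shap` library, which uses far more efficient approximations) enumerates every coalition, so it is only feasible for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley value of each feature for a coalition-valued function."""
    n = n_features
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
        values.append(phi)
    return values

# Toy "game": the payoff of a coalition of features. Because this payoff is
# additive, the Shapley values simply recover each feature's own contribution.
contrib = {0: 1.0, 1: 2.0, 2: -0.5}
v = lambda S: sum(contrib[j] for j in S)
print(shapley_values(v, 3))  # approximately [1.0, 2.0, -0.5]
```

In SHAP, the `value_fn` role is played by the model's expected prediction when only the features in the coalition are "known", which is what ties Shapley values to feature importance.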

3) **Visual explanations**: examples include ICE (Individual Conditional Expectation) and PD (Partial Dependence) plots.
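Both plots come from the same computation, sketched below with an assumed toy model and data: vary one feature over a grid while holding the others at their observed values, then keep one curve per instance (ICE) or average the curves (PD). scikit-learn provides this via `sklearn.inspection.partial_dependence` and `PartialDependenceDisplay`; this hand-rolled version just shows the mechanics.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 3))
y = 3 * X[:, 0] + np.sin(3 * X[:, 1])  # feature 0 has a strong linear effect
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def ice_and_pd(model, X, feature, grid):
    """ICE curves (one per instance) and their average, the PD curve."""
    ice = np.empty((X.shape[0], grid.size))
    for k, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v          # force the feature to the grid value
        ice[:, k] = model.predict(Xv)
    return ice, ice.mean(axis=0)    # PD = pointwise average of ICE curves

grid = np.linspace(-1, 1, 5)
ice, pd_curve = ice_and_pd(model, X, feature=0, grid=grid)
print(pd_curve)  # should rise roughly linearly across the grid
```

Plotting each row of `ice` gives the ICE plot and plotting `pd_curve` gives the PD plot; comparing them reveals whether the average effect hides heterogeneous per-instance behaviour.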

You can download the full paper here: *Principles and Practice of Explainable Machine Learning* by Vaishak Belle and Ioannis Papantonis.
