I am reading a very interesting paper called Principles and Practice of Explainable Machine Learning by
Vaishak Belle (University of Edinburgh & Alan Turing Institute) and Ioannis Papantonis (University of Edinburgh), which presents a taxonomy of explainable AI (XAI).
XAI is a complex subject and, as far as I can see, I have not yet come across a taxonomy of XAI. Hence, I hope you will find this useful (paper link below).
The paper itself is detailed, so I will explain only a small section here, adapted from the paper.
For opaque models, types of post-hoc explanations include (adapted from the paper):
Model-agnostic explainability approaches are designed to be flexible and do not depend on the intrinsic architecture of a model (such as a random forest). These approaches relate the inputs to the outputs only. Model-agnostic approaches can take the form of explanation by simplification, explanation by feature relevance, or explanation by visualization.
1) Explanation by simplification: includes techniques like LIME (Local Interpretable Model-agnostic Explanations), which approximates an opaque model locally, in the surrounding area of the prediction we are interested in explaining (a minimal code sketch follows after this list).
2) Explanation by feature relevance: includes techniques like SHAP (SHapley Additive exPlanations), which builds a linear model around the instance to be explained and then interprets the coefficients as feature importances. SHAP is similar to LIME, but the theory behind SHAP is based on coalitional game theory (Shapley values); see the sketch after this list.
3) Visual explanations: for example, ICE (Individual Conditional Expectation) and PD (Partial Dependence) plots, which show how a prediction changes as a feature varies for an individual instance and on average over the data, respectively (illustrated after this list).
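To make the LIME idea concrete, here is a minimal sketch, not from the paper, assuming the Python lime and scikit-learn packages and an illustrative random forest trained on the breast cancer dataset:

```python
# Minimal LIME sketch (illustrative assumptions: lime + scikit-learn installed,
# a random forest as the opaque model, the breast cancer dataset as the data).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the chosen instance, queries the opaque model on the perturbed
# points, and fits a weighted linear surrogate valid only in that neighbourhood.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0],             # the single prediction we want to explain
    model.predict_proba,      # the opaque model is used only as a black box
    num_features=5,           # keep the 5 locally most influential features
)
print(explanation.as_list())  # (feature condition, local weight) pairs
```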
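A corresponding SHAP sketch, again an illustrative assumption rather than the paper's code; shap.TreeExplainer is used here because the model is a tree ensemble, while shap.KernelExplainer is the model-agnostic variant that fits the local linear model described above:

```python
# Minimal SHAP sketch (illustrative assumptions: shap + scikit-learn installed,
# a random forest regressor on the diabetes dataset as the opaque model).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles; for an
# arbitrary black box, shap.KernelExplainer(model.predict, background_data)
# would play the model-agnostic role instead.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # shape: (n_samples, n_features)

# Each row attributes one prediction to the individual features, so the
# contributions plus the expected value sum back to the model's output.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```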
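Finally, a minimal sketch of ICE and PD plots using scikit-learn's PartialDependenceDisplay; the library, dataset, and feature choices are illustrative assumptions, not from the paper:

```python
# Minimal ICE/PD sketch (illustrative assumptions: scikit-learn >= 1.0 and
# matplotlib installed, a random forest regressor on the diabetes dataset).
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# kind="both" overlays one ICE curve per instance (thin lines) with the partial
# dependence curve, i.e. their average (thick line), for each chosen feature.
PartialDependenceDisplay.from_estimator(
    model,
    data.data,
    features=[0, 2],                    # indices of the features to vary
    feature_names=data.feature_names,
    kind="both",
)
plt.show()
```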
You can download the full paper here: Principles and Practice of Explainable Machine Learning by Vaishak ...
Posted 12 April 2021