
Competition: Explaining black box machine learning models

The Explainable Machine Learning Challenge is a collaboration between Google, FICO, and academics at Berkeley, Oxford, Imperial, UC Irvine, and MIT to generate new research in the area of algorithmic explainability. Teams will be challenged to create machine learning models that are both accurate and explainable, using a real-world financial dataset provided by FICO. Both designers and end users of machine learning algorithms stand to benefit from more interpretable and explainable algorithms. Model designers, in particular, will benefit from model explanations: written accounts of how a trained model functions. These might identify which variables or examples are particularly important, explain the logic used by an algorithm, and/or characterize the input/output relationships between variables and predictions. Teams are expected to tell the story of their model, and their explanations will be evaluated qualitatively by data scientists at FICO.
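To make the idea of a model explanation concrete, here is a minimal sketch of one common technique, permutation feature importance, which scores each variable by how much the model's accuracy drops when that variable's values are shuffled. The synthetic data, gradient-boosted model, and generic feature names below are illustrative placeholders, not the FICO dataset or any required approach.

```python
# A minimal sketch of one kind of model explanation: permutation
# feature importance. The data here is synthetic; competition
# entrants would work with the FICO dataset instead.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-risk dataset.
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A typical "black box": a gradient-boosted tree ensemble.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Because permutation importance only needs a fitted model's predictions, it applies equally well to models whose internals are opaque.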


Complex machine learning models have recently achieved impressive predictive performance in many applications. While these models excel at capturing complex, non-linear relationships between variables, neither the trained model nor its individual predictions is usually readily explainable. In settings where regulators or consumers demand explanations, making the structure and predictions of these models understandable will pave the way for their wider adoption in practice. Explainability will also help data scientists understand their datasets and their models’ predictions, uncover and correct biases, and ultimately build better models.
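One way to get at the structure of a black box, sketched below under the same kind of synthetic setup as above, is a global surrogate: a small, readable decision tree trained to mimic the black box's predictions. The model choice, tree depth, and feature names are assumptions for illustration, not a prescribed method.

```python
# A minimal sketch of a global surrogate model: fit a shallow decision
# tree to mimic a black box's predictions, giving a rough but
# inspectable view of the structure the black box has learned.
# The data is synthetic; feature names are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=0)
blackbox = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels,
# so the tree approximates the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, blackbox.predict(X))

# Fidelity: how often the shallow tree agrees with the black box.
print(f"surrogate fidelity: {surrogate.score(X, blackbox.predict(X)):.2%}")

# The tree's rules are themselves a human-readable explanation.
print(export_text(surrogate,
                  feature_names=[f"feature_{i}" for i in range(8)]))
```

The fidelity score makes the trade-off explicit: a surrogate is only a trustworthy explanation to the extent that it actually agrees with the model it summarizes.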

For details, click here
