
Why I agree with Geoff Hinton: I believe that Explainable AI is over-hyped by the media

 

Geoffrey Hinton dismissed the need for explainable AI. A range of experts have explained why he is wrong.

 

I actually tend to agree with Geoff.

Explainable AI is overrated and hyped by the media.

 

And I am glad someone of his stature is calling it out.

 

To clarify, I am not saying that interpretability, transparency, and explainability are not important (and neither is Geoff Hinton, for that matter).

A whole industry has sprung up with a business model based on scaring everyone that AI is not explainable.

And they use words like "discrimination" that create a sense of shock and horror.

However, for starters – in most western countries – we already have laws against discrimination.

These laws apply irrespective of the medium (including AI).

Now, let us look at interpretability itself.

 

Cassie Kozyrkov from Google rightly points out that “It all boils down to how you apply the algorithms. There’s nothing to argue about if you frame the discussion in terms of the project goals.”

In the simplest sense, this could mean that if you need to provide explainability, you could run another algorithm (such as a tree-based model) that can explain the result (for example, to provide an explanation for the rejection of a loan).
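As a minimal sketch of that idea (a global surrogate model), assuming scikit-learn is available and using synthetic stand-in data rather than a real loan dataset:

```python
# Sketch: approximating a black-box model with an interpretable surrogate tree.
# The data and feature names below are synthetic stand-ins for a real loan dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "loan application" data (placeholders for features such as
# income, debt ratio, credit history length, loan amount).
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history", "loan_amount"]

# The opaque model that actually makes the accept/reject decision.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# A shallow tree trained to mimic the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules that approximate why the black box accepts or rejects.
print(export_text(surrogate, feature_names=feature_names))
```

Because the surrogate is trained on the black box's predictions rather than the true labels, its rules only approximate the black box's behaviour; how faithful that approximation is (its fidelity) should be checked before relying on it.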

Another great resource is Christoph Molnar's free book on interpretability, Interpretable Machine Learning.

I highly recommend it because it shows how complex and multi-faceted the issue is.

For example, the book talks about:

  • There is no mathematical definition of interpretability.
  • Interpretability methods have a taxonomy.
  • That taxonomy covers specific aspects: intrinsic or post hoc; the form of the result (feature summary statistic, feature summary visualization, model internals such as learned weights, data point, intrinsically interpretable model); model-specific or model-agnostic; local or global (a minimal sketch of one model-agnostic, post-hoc method follows this list).
  • Interpretability has a scope: algorithm transparency; global, holistic model interpretability; modular (global interpretability on a modular level); local (for a single prediction or a group of predictions); etc.
  • Evaluation of interpretability: application-level evaluation (real task), human-level evaluation (simple task), function-level evaluation (proxy task).
  • Properties of explanation methods: expressive power, translucency, portability, algorithmic complexity.
  • Properties of individual explanations: accuracy, fidelity, consistency, stability, comprehensibility, certainty, degree of importance, novelty, representativeness.
  • Good explanations are contrastive, selected, social, focused on the abnormal, truthful, consistent with prior beliefs of the explainee, and general and probable.
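To make one of those dimensions concrete, here is a minimal sketch of a model-agnostic, post-hoc, global explanation method (permutation feature importance) of the kind the book catalogues. The dataset, model, and feature count below are synthetic stand-ins, not examples from the book:

```python
# Sketch: permutation feature importance - shuffle each feature in turn and
# measure the drop in held-out accuracy; a large drop means the model leans
# heavily on that feature. Works for any fitted model (model-agnostic).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: {mean:.3f} +/- {std:.3f}")
```

The same recipe gives a global feature summary statistic for any black-box model, which is precisely why the local/global and model-specific/model-agnostic distinctions in the taxonomy matter when deciding what kind of explanation a project actually needs.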

 

Finally, Patrick Feris provides a good introduction to explainable AI and why we need it.

 

By sharing these links, I hope we can elevate the level of discussion.

 

To conclude, I believe that interpretability is context-specific, i.e. specific to the project and the problem.

There are many dimensions to, and possible solutions for, the problem when seen from a business perspective.

 

Image source: Trusted Reviews


Comments


Comment by ajit jaokar on December 28, 2018 at 10:38pm

Thanks for your comments, Richard. Yes, the Molnar book gives some answers but, I suspect, still leaves many questions. Happy holidays!

Comment by Richard Huddleston on December 28, 2018 at 8:08pm

Is there an agreed-upon distinction between explainability, transparency, and interpretability?  Or are they largely being used interchangeably? **

My personal thumbnail sketch, drawn from my own practice:

  • Explainability is what we need for use cases where liability is held for bad outcomes (obvious example:  fatal accidents involving one or more autonomous cars / aircraft).  It tells us why the model produced its output, as directly as possible. 

    Those use cases do exist, and where financial / tort liability is involved then we can expect that explainability will be required by statute / policy.  I would say further that the responsible use of AI for this class of use cases requires practitioners to provide explainability before it is legally required.

  • Transparency provides assurance that the model is trained / inferring on the correct data features.  I'd like to know that, say, my horse-image classifier hasn't learned to relate equestrian stable logos in the bottom-right corner of the images with what a horse is, even if it scores really well on that data.

  • Interpretability seems sensibly defined as the model producing output that is comprehensible and clear in meaning. 

    Given that definition, who would argue against needing interpretability?  


If these terms are used interchangeably, I'd argue that's an error and creates a tendency towards strawman arguments.

** I'll read Molnar's book over the remainder of the holiday break.  Perhaps these terms are distinctively defined there.  I'm still eager to find them defined distinctively elsewhere in the literature (not that I've read everything).
