
Why I agree with Geoff Hinton: I believe that Explainable AI is over-hyped by the media


Geoffrey Hinton dismissed the need for explainable AI. A range of experts have responded, arguing that he is wrong.

I actually tend to agree with Geoff.

Explainable AI is overrated and hyped by the media.

And I am glad someone of his stature is calling it out.

To clarify, I am not saying that interpretability, transparency, and explainability are unimportant (and neither is Geoff Hinton, for that matter).

A whole industry has sprung up with a business model of scaring everyone about AI not being explainable.

And they use words like "discrimination", which create a sense of shock and horror.

However, for starters, most Western countries already have laws against discrimination.

These laws apply irrespective of the medium, AI included.

Now, let us look at interpretability itself.

Cassie Kozyrkov from Google rightly points out that "It all boils down to how you apply the algorithms. There's nothing to argue about if you frame the discussion in terms of the project goals."

In the simplest sense, this could mean that if you need to provide explainability, you could run another algorithm (such as a tree-based model) alongside the black box to explain its results (for example, to provide an explanation for the rejection of a loan). A minimal sketch of this idea follows below.
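
As a minimal sketch of that idea, the snippet below trains a global surrogate decision tree on the predictions of a hypothetical black-box loan model, so that the tree's rules can be read as an approximate explanation. The data, features, and models here are made-up placeholders, not a real credit-scoring setup.

```python
# Minimal sketch: explain a black-box loan model with a surrogate decision tree.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, debt ratio, credit history length
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # toy loan outcomes

# The "black box" we are asked to explain
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow tree fitted to the black box's own predictions
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules approximating the black box's behaviour,
# e.g. which thresholds push an application towards rejection
print(export_text(surrogate,
                  feature_names=["income", "debt_ratio", "credit_history"]))
```

One caveat worth stating in the same breath: how faithfully the surrogate mirrors the black box (its fidelity) should be checked, for instance by comparing the two models' predictions on held-out data, before its rules are handed to a customer as the explanation.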

Another great resource is Christoph Molnar's free book on interpretability, Interpretable Machine Learning.

I highly recommend it because it shows how complex and multi-faceted the issue is.

For example, the book talks about:

  • There is no mathematical definition of interpretability.
  • Interpretability has a taxonomy.
  • The taxonomy covers specific aspects: intrinsic or post hoc; feature summary statistic; feature summary visualization; model internals (e.g. learned weights); data point; intrinsically interpretable model; model-specific or model-agnostic; local or global (a short model-agnostic sketch follows this list).
  • Interpretability has a scope: algorithm transparency; global, holistic, or modular model interpretability; local interpretability for a single prediction, etc.
  • Evaluation of interpretability: application-level evaluation (real task), human-level evaluation (simple task), function-level evaluation (proxy task).
  • Properties of explanation methods: expressive power, translucency, portability, algorithmic complexity.
  • Properties of individual explanations: accuracy, fidelity, consistency, stability, comprehensibility, certainty, degree of importance, novelty, representativeness.
  • Good explanations are contrastive, selected, social, focus on the abnormal, are truthful, consistent with prior beliefs of the explainee, and are general and probable.
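
To make one of those taxonomy entries concrete, here is a minimal sketch of a post-hoc, model-agnostic, global "feature summary statistic" explanation using permutation importance. The dataset, model, and feature labels are placeholders for illustration, not anything taken from the sources above.

```python
# Sketch of a post-hoc, model-agnostic feature summary statistic:
# permutation importance measured on a held-out set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data standing in for a real business problem
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any fitted model can be plugged in here; the method never looks inside it
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test score:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```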

Finally, Patrick Feris provides a good introduction to explainable AI and why we need it.

By sharing these links, I hope we can elevate the level of discussion.

To conclude, I believe that interpretability is context-specific: it depends on the project and the problem.

There are many dimensions to the problem, and many possible solutions, when it is seen from a business perspective.

Image source: Trusted Reviews
