Geoffrey Hinton dismissed the need for explainable AI. A range of experts have explained why he is wrong.
I actually tend to agree with Geoff.
Explainable AI is overrated and hyped by the media.
And I am glad someone of his stature is calling it out.
To clarify, I am not saying that interpretability, transparency, and explainability are unimportant (and neither, for that matter, is Geoff Hinton).
A whole industry has sprung up whose business model is scaring everyone about AI not being explainable.
And they use words like "discrimination" that create a sense of shock and horror.
However, for starters, most Western countries already have laws against discrimination.
These laws apply regardless of the medium, AI included.
Now, let us look at interpretability itself.
Cassie Kozyrkov from Google rightly points out that "It all boils down to how you apply the algorithms. There's nothing to argue about if you frame the discussion in terms of the project goals."
In the simplest sense, this could mean that if you need to provide explainability, you could run a second, more interpretable algorithm (such as a tree-based model) that can explain the result of the first (for example, provide an explanation for the rejection of a loan).
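As a minimal sketch of that idea: below, a simple threshold rule (effectively a depth-1 decision tree) is fitted to mimic the decisions of an opaque scoring function. The scoring function, the feature names, and the data are all hypothetical, chosen only to illustrate the surrogate-model technique.

```python
# Illustrative sketch (hypothetical data and scoring logic): approximate
# an opaque loan-scoring model with a simple, explainable rule.
import random

random.seed(0)

def black_box_score(income, debt_ratio):
    """Stand-in for an opaque model: nonlinear logic we cannot inspect."""
    return 1 if income * (1 - debt_ratio) ** 2 > 0.25 else 0

# Synthetic applicants: (income, debt_ratio), both scaled to [0, 1].
applicants = [(random.random(), random.random()) for _ in range(1000)]
labels = [black_box_score(i, d) for i, d in applicants]

def fidelity(threshold):
    """How often an 'approve if income > threshold' rule agrees
    with the black box on the synthetic applicants."""
    preds = [1 if i > threshold else 0 for i, _ in applicants]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Fit the interpretable surrogate: pick the single income threshold
# that best mimics the black box's decisions.
best = max((t / 100 for t in range(100)), key=fidelity)
print(f"Surrogate rule: approve if income > {best:.2f} "
      f"(fidelity {fidelity(best):.0%})")
```

The surrogate will not be perfectly faithful (here it ignores `debt_ratio` entirely), which is the usual trade-off: you exchange some fidelity for a rule a loan applicant can actually understand.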
Another great resource is Christoph Molnar's free book on interpretability, Interpretable Machine Learning.
I highly recommend it because it shows how complex and multi-faceted the issue is.
For example, the book covers model-agnostic methods such as partial dependence plots, LIME, and Shapley values.
Finally, Patrick Feris provides a good introduction to explainable AI and why we need it.
By sharing these links, I hope we can elevate the level of discussion.
To conclude, I believe that interpretability is specific to the context of the project and the problem.
Seen from a business perspective, the problem has many dimensions and many possible solutions.