The previous posts in this series covered several ways business leaders can understand and explore how Artificial Intelligence can impact their business. We saw that there are several key ways in which AI advances can improve human productivity in organizations. The last two articles dived into Distillation: automating the path to value, and Categorization: managing data at scale. In this article, we’ll look at a third type of AI application: Prediction.
Prediction is the application of AI approaches that learn from past (and possibly other) data to predict what will happen. An excellent example is the spam filter used in email systems. Based on past email that has been labelled as Spam or not Spam (frequently called Ham), a predictive model can be developed that predicts whether a new, never-before-seen email is Spam or Ham.
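To make the idea concrete, here is a minimal sketch of how such a filter can be trained and applied. It uses Naive Bayes with add-one smoothing, a classic approach to spam filtering; the training emails and word choices below are invented purely for illustration, and a real filter would be trained on thousands of labelled messages.

```python
from collections import Counter
import math

# Toy labelled training data: (text, label) pairs.
# These examples are invented; real spam filters learn from
# large corpora of email already marked Spam or Ham.
train = [
    ("win money now", "spam"),
    ("free prize claim now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow", "ham"),
    ("project status update", "ham"),
]

# Count word frequencies per class and how often each class occurs.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Return the more probable label under Naive Bayes with
    Laplace (add-one) smoothing, computed in log space."""
    scores = {}
    for label in word_counts:
        # Log prior: how common the class is in the training data.
        score = math.log(class_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for word in text.split():
            count = word_counts[label].get(word, 0)
            # Smoothed likelihood of the word given the class.
            score += math.log((count + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("claim your free money"))  # classified as spam
print(predict("lunch tomorrow"))         # classified as ham
```

The filter never sees the test messages during training; it generalizes from word statistics in past email, which is the essence of Prediction as discussed here.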
The business value of predictive AI can be enormous. Being able to react and respond pre-emptively to changes in supply chains can minimize costs (Amazon being a great example), while recognizing shifts in customer perception and sentiment can ensure brand value is protected when external events change quickly (for example, recent events with United Airlines).
On the technical front, there are many approaches to building predictive models (Bayesian methods, classical loss minimization, etc.) and numerous flavours of predictive model (classic decision trees and deep learning neural nets, for example). However, a frequently neglected aspect of predictive models in business applications is the value, and often the requirement, of explaining the prediction.
For instance, Deep Learning is an incredibly powerful method for building predictive models (e.g. does this video contain a cat?) but suffers from challenges in explaining its predictions. In many business applications, the why can be as important as the what: not just for the utility of the prediction (i.e. making sure the business can act on it appropriately) but also for the adoption of predictive models in the first place (i.e. socializing and becoming comfortable with algorithmic predictions).
Many classical predictive model flavours are readily interpretable. For instance, a decision tree model clearly indicates why a given input yielded the predicted response. However, these models frequently underperform more advanced model types, such as Deep Learning models. Making Deep Learning interpretable is an active area of research, and recent approaches such as LIME (Local Interpretable Model-agnostic Explanations) are showing promise.
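A small sketch illustrates why decision trees are so easy to explain: every prediction is just a path of simple tests, and that path can be handed back to the business user as the reason. The tree below is hand-built for a hypothetical loan-approval scenario; the features, thresholds, and outcomes are invented for illustration, not drawn from any real model.

```python
# A hand-built decision tree for a hypothetical loan-approval model.
# Each branch records the test it applied, so the prediction comes
# with a human-readable explanation of why it was made.

def predict_with_explanation(applicant):
    """Walk the tree, collecting each test that fired along the way."""
    path = []
    if applicant["income"] >= 50000:
        path.append("income >= 50000")
        if applicant["debt_ratio"] < 0.4:
            path.append("debt_ratio < 0.4")
            return "approve", path
        path.append("debt_ratio >= 0.4")
        return "review", path
    path.append("income < 50000")
    if applicant["years_employed"] >= 5:
        path.append("years_employed >= 5")
        return "review", path
    path.append("years_employed < 5")
    return "decline", path

decision, reasons = predict_with_explanation(
    {"income": 62000, "debt_ratio": 0.25, "years_employed": 3}
)
print(decision)  # the predicted action
print(reasons)   # the tests that led to it
```

Contrast this with a deep neural network, where the prediction emerges from millions of weighted interactions and no comparable path of rules exists, which is exactly the gap that techniques like LIME aim to close.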
This concludes our brief survey of AI technologies and the ways in which they can impact businesses. The three key types of approaches discussed here – Distillation, Categorization, and Prediction – are a convenient framework to consider where and how business challenges can be impacted by AI, and I hope readers find it useful in their own endeavours.
Roy Wilds, PhD is the Chief Data Scientist at PHEMI Systems, a big data warehouse solutions company.