Summary: Gartner says that predictive analytics is a mature technology, yet only one company in eight currently uses it to predict the future of sales, finance, production, and virtually every other area of the enterprise. What is the promise of predictive analytics, and what exactly is it?
If you are considering a Big Data initiative, or more simply already have a data warehouse of any size, then you know the most basic truth about the value of that data: you cannot extract value from data without analytics. Analytics finds the patterns in the data that make it valuable and actionable. There are two broad categories of analytics: the first is Reports, Visualizations, and Dashboards; the second is Predictive Analytics.
Reports, Visualizations, and Dashboards: More than 99% of the analytics produced by companies falls into this category. We have all grown up professionally with the sort of scheduled or ad hoc queries that generate reports about sales, products, performance, or the success or failure of any of our business processes. Human beings are good at pattern recognition (within broad limits), and given a good series of tables or graphs (visualizations), we can draw conclusions such as 'the sale of widgets to small customers in the northeast is off compared to the west,' so action must be taken.
Some more advanced companies have assembled these graphs and visualizations into groupings that go together to describe a single phenomenon such as sales or profitability and arranged these into ‘dashboards’ allowing humans to cast their talented eyes across many graphs at once in order to spot the pattern. Here’s a typical dashboard that even allows interaction with the data (from the popular reporting program Crystal Reports).
The most important thing to understand about Reports, Visualizations, and Dashboards is that they describe things that happened in the past.
Predictive Analytics: Predictive Analytics is fundamentally different because it predicts what will happen in the future. Taking nothing away from reports and graphs, most of us would say from a business standpoint that accurately predicting the future is more valuable than understanding the past. So why is it that only one company in eight currently engages in predictive analytics?[i] And this for a technology that Gartner describes as fully mature. There are several reasons.
A Fundamental Lack of Understanding of What Predictive Analytics Is and How It Creates Value: I have been a predictive modeler for over a decade, so let me give you an oversimplified and too-brief description. There are fundamentally two types of predictive models:
Those that describe smooth but variable outcomes in the future (examples would be sales, profit, customer satisfaction, or the output from a manufacturing process such as the production of a petroleum refinery).
Those that describe binary outcomes (buy or didn’t buy, this transaction is fraudulent or honest, this customer is about to defect or continue his relationship with us, this machine is about to fail or will continue to operate, this blood test describes diabetes or a healthy state).
As these examples illustrate, predictive models benefit all the major business processes, from finance, sales and marketing, and production and manufacturing to the supply chain and the overall success of the entire enterprise.
To quote from an excellent Ventana study done in 2012 describing the uses of predictive analytics in companies that have adopted, “marketing (65%) and sales (59%) are the most common (use)” and “the top five sources of data tapped for predictive analytics also relate directly to revenue: customer (69%), marketing (67%), product (55%), sales (54%) and financial (51%)”[ii].
Smooth Forecast Models: This type is easy to understand. Using the tools of predictive analytics we examine a number (even a large number) of variables we believe to be related to the event to be forecast and the predictive analytic tools can identify relationships that forecast a specific numerical outcome. For example, we were able to forecast the market share in the next model year for a new model of automobile in the US using a variety of factors such as price relative to competition, number of dealer outlets, customer satisfaction with the different brands, the number of years since introduction for competitive models, relative equipment loads, and other factors. Other examples include the future price of a security (this type of modeling is the bread and butter of Wall Street quants) or the future wholesale price of electricity a utility will have to pay given a variety of external and internal variables.
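To make the idea concrete, here is a minimal sketch of a smooth forecast model: an ordinary least-squares fit on a single predictor. Real models of the kind described above use many variables and dedicated modeling tools; the relative-price and market-share figures below are invented purely for illustration.

```python
# Minimal sketch of a "smooth forecast" model: ordinary least squares
# on one hypothetical predictor. The data below is invented.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical history: price relative to competition vs. market share (%)
rel_price = [0.90, 0.95, 1.00, 1.05, 1.10]
share     = [6.1, 5.4, 4.8, 4.1, 3.5]

a, b = fit_line(rel_price, share)
forecast = a + b * 1.02  # predicted share if priced 2% above competitors
```

In practice you would fit dozens of candidate variables (dealer counts, satisfaction scores, equipment loads) and let the tooling identify which carry predictive weight; the single-variable fit above only shows the mechanics.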
Scoring Models: These binary 'winners and losers' models are a little more complex. First the outcome, a sale or no sale for example, is reduced to a 0 or 1. Then we gather a sample of recent historical customers who faced this same purchase/no-purchase decision and match each customer to a variety of internal and external known variables. You would be surprised how much data you actually already have about your customers: for example, how many times in the recent past they have visited your web site and which specific pages, and from web logs we can get a close approximation of geographic location. We can determine what type of computer they used (from the processor type) and, if they bought, what type of credit card was used; whether they had purchased before, and how recently and how often. That data can be enhanced with external data available from many data consolidators, and it is applied not only to historical customers but also to those we previously promoted who didn't buy, and to our list of future prospects.
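The mechanics of fitting such a 0/1 model can be sketched very simply. The example below trains a tiny logistic regression by gradient descent; this is one common technique for scoring models, not necessarily the one used in any particular engagement, and the two features and all the rows are invented.

```python
# Sketch of a scoring model: a tiny logistic regression trained by
# gradient descent on invented data. Each row is
# [recent_web_visits, purchased_before (0/1)]; the label is 1 if the
# customer bought, 0 if not.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=2000):
    w = [0.0] * (len(rows[0]) + 1)   # bias + one weight per feature
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            err = y - p                       # prediction error
            w[0] += lr * err                  # nudge bias toward truth
            for i, xi in enumerate(x):
                w[i + 1] += lr * err * xi     # nudge each weight
    return w

rows   = [[0, 0], [1, 0], [2, 0], [5, 1], [6, 1], [8, 1]]
labels = [0, 0, 0, 1, 1, 1]
w = train(rows, labels)

# Probability of purchase for a frequent visitor who bought before
score = sigmoid(w[0] + w[1] * 7 + w[2] * 1)
```

With 200 real variables the fitting step is the same in spirit; the work is in assembling and cleaning that customer table, not in the arithmetic.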
As an example, I produced a predictive model for a major insurance company to predict the likelihood that existing customers of one type of insurance would respond positively to a particular promotion and buy an additional type of insurance. My client had over 200 elements of information about each customer and prospect, including home ownership, credit card types, credit card balances, number of people living in the residence, specific magazine subscriptions, and on and on. I hope you are not surprised that so much data is available.
We produced a predictive model based on a test sample of the promotion in which buyers and non-buyers were known. Of the 200 available variables (known characteristics of each buyer or non-buyer), 60 were somewhat predictive, and 20 of these were highly predictive. We were able to predict buyers from non-buyers about 80% of the time.
The real value, however, was that we were able to score each person in the sample with a value from 1 to 100 based on their likelihood to buy. The output of a scoring model is a few lines of computer code that can then be used to score the prospects that have not yet been promoted. Based on the score, we found that those scoring above about 70 were sufficiently likely to buy that the economics of the promotion resulted in a profit, and we avoided the cost of promoting to those below 70, for whom sending the costly promotional offer would have resulted in a financial loss. This kind of model is called a 'lift model' and is at the core of almost all behavioral predictive models.
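The economics behind the cutoff can be sketched in a few lines. Every number below (the margin, the promotion cost, and the toy mapping from score to purchase probability) is invented; the point is only to show why promoting above a threshold can turn a money-losing campaign into a profitable one.

```python
# Hedged sketch of lift-model economics: promote only prospects whose
# score clears a cutoff where expected profit is positive. Margin,
# promo cost, and the score-to-probability mapping are invented.

def expected_profit(scores, prob_of_buy, margin=120.0,
                    promo_cost=25.0, cutoff=70):
    """Promote only prospects scoring at or above `cutoff`;
    return total expected net profit."""
    profit = 0.0
    for s in scores:
        if s >= cutoff:
            profit += prob_of_buy(s) * margin - promo_cost
    return profit

# Toy calibration: a score of 70 maps to roughly a 25% chance of buying
prob = lambda s: s / 280.0
everyone = list(range(1, 101))   # one prospect at each score 1..100

with_cutoff = expected_profit(everyone, prob, cutoff=70)   # promote top tier
no_cutoff   = expected_profit(everyone, prob, cutoff=1)    # promote everyone
```

Under these invented numbers, promoting everyone loses money while promoting only the high scorers earns a profit, which is exactly the lift a scoring model buys you.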
These days, if you call the support center of a major bank that uses predictive modeling, the support person is likely to ask at the end of your conversation whether you would be interested in product X. It is likely that you have just been scored in real time, and that the product you are most likely to buy has popped up, along with a suggested script, on the support person's screen just in time for them to engage you. This concept, which has been around for some time, is now being labeled 'prescriptive analytics' by Gartner.
Does Big Data Change this Landscape?
Up to this point these comments have been directed at how to extract value from the structured data that you already have or could add from outside sources such as append services. Does the arrival of Big Data still fit into these two categories, Visualizations and Predictive Analytics? Well, sort of.
Not to try to get off the hook, Big Data Analytics has introduced at least two new forms of predictive analytics that are perhaps less ‘accurate’ than predictive analytics on structured data but allow new types of insights that are valuable and were very difficult to achieve in SQL/RDBMS systems.
I've used quotation marks around 'accurate' in describing predictive analytics on structured data because, as any good analyst will tell you, these models are not perfect. Useful predictive models will generally be in the range of 70% to 95% predictive of future events, depending on a whole host of factors too detailed to describe here. If we were betting on horses, you'd have to admit that anything better than 50/50 would give you an edge.
The distinction here is that these new types of Big Data Analytics are more directional and advisory than explicitly predictive.
Recommenders: If you have really large amounts of data as do Amazon, Ebay, Facebook, or similar Big Web User apps, you can make useful predictions of what User A may want based on evaluation of what other similar users have selected. Is it predictive to say that User A may want to buy one of these 20 recommended books or songs? It is when the universe of choices is overwhelmingly large and the use of Recommendation Engines running on NoSQL Graph or Column-Oriented databases increases sales by presenting attractive options.
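The "users who liked X also liked Y" idea can be sketched in miniature. The purchase histories below are invented, and the similarity measure (count of shared items) is about the crudest possible; production recommenders run far richer versions of this at scale on graph or column-oriented stores.

```python
# Minimal item-recommender sketch: rank items a user lacks by how many
# similar users bought them. All purchase histories are invented.
from collections import defaultdict

purchases = {
    "ann":   {"book_a", "book_b", "book_c"},
    "bob":   {"book_a", "book_b"},
    "carol": {"book_b", "book_c", "book_d"},
}

def recommend(user, histories, top_n=3):
    """Score each unseen item by the overlap of its buyers with `user`."""
    mine = histories[user]
    counts = defaultdict(int)
    for other, theirs in histories.items():
        if other == user:
            continue
        overlap = len(mine & theirs)      # crude similarity: shared items
        for item in theirs - mine:        # items the user hasn't got
            counts[item] += overlap
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

suggestions = recommend("bob", purchases)
```

With millions of users the overlap computation is what NoSQL graph and column stores make tractable, but the underlying logic is this simple.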
It’s worth noting that this explicitly extends to human relationships. NoSQL Graph DBs can show all types of hidden connections among people (friends of friends, alumni, work alumni, people who like dogs, singles who are in the same bar as you right now) and any cruiser of the internet will recognize these at work from LinkedIn to Foursquare.
Natural Language Processing: NLP, also known as Text Analysis or even Sentiment Analysis (a more limited form), is able to extract value from completely unstructured text. One application is to judge consumer and prospect feelings toward a product or service based on certain key words and phrases scraped from thousands or even millions of social media comments. 'The price is too high', 'the service was poor but the food was good', 'it broke after a few days' – these are the sort of comments that can give valuable insights within a few days or weeks of an event.
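The key-word-and-phrase approach mentioned above can be sketched as a simple lexicon count. Real NLP pipelines use far richer linguistic models; the tiny positive/negative word lists here are illustrative assumptions, chosen only to classify the three example comments.

```python
# Sketch of keyword-based sentiment: count hits against small
# positive/negative lexicons. The word lists are illustrative only.

POSITIVE = {"good", "great", "love"}
NEGATIVE = {"poor", "broke", "high", "bad"}

def sentiment(comment):
    """Label a comment by net count of positive vs. negative words."""
    words = comment.lower().replace(",", "").split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "mixed"

labels = [sentiment(c) for c in [
    "The price is too high",
    "the service was poor but the food was good",
    "it broke after a few days",
]]
```

Note how 'the service was poor but the food was good' comes out mixed, which is exactly the kind of nuance that pushes real systems beyond word counts toward phrase- and context-aware models.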
Second in this category is the batch or near-real-time analysis of text streams, for example from customer service logs or from customer service conversations as they are happening. Again, based on key words and phrases (some have gone as far as to introduce voice stress analysis as a variable), the CSR can anticipate what solution may be best or even what cross-sell opportunity may be most successful. In this last case in particular, current structured transactional data (e.g., what did they buy last and when) can be added to increase accuracy. Does pure Natural Language Processing rise to the same level of predictive accuracy as predictive models based on traditional structured data? No. Does it offer insights that are at least directional and close in time to the event? Yes, and that makes it valuable.
All these types of predictive analytic models can have huge positive financial impacts, at the level of individual campaigns, of entire operating units like sales or production, or of the financial bottom line of the entire enterprise. Executives, managers, champions at all levels: if you have not examined the value of these tools, find an expert and have a serious exploratory conversation.
[i] Predictive Analytics, Improving Performance by Making the Future More Visible, Ventana Research, sponsored by SAP, Feb. 2012.
Bill Vorhies, President & COO – Data-Magnum - © 2014, all rights reserved.
About the author: Bill Vorhies is President & COO of Data-Magnum and has practiced as a data scientist and commercial predictive modeler since 2001. He can be reached at:
The original blog can be seen at: