The Value of Accuracy in Predictive Analytics

This article was first posted in 2014, but the message bears repeating.  A lot is being written about tools simple enough for the citizen data scientist to operate.  The unstated constraint is that without significant experience in data science, these will always produce merely "good enough" models.  The problem is that "good enough" models underachieve on both revenue and profit.  Very small increases in model fitness can translate into much larger increases in campaign ROI.  Business sponsors may say they want a quick answer; it's up to data scientists to show them that more effort on accuracy more than pays off.

Summary:  Many first-time users of predictive models are happy to have the benefit of a good model with which to target their marketing initiatives and don't ask the equally important question: is this the best model we could be using?  This case study demonstrates that a very small improvement in model accuracy can result in very large financial gains: a change in fitness of only 0.01 point can mean a financial improvement of nearly 8% in campaign ROI.

Factors that control the accuracy of a predictive model

The accuracy of a model is controlled by three major variables:

1) First and foremost, the inherent predictiveness of your data.  There is an unknown, fixed limit to how predictive any data set can be, regardless of the tools used or the experience of the modeler.
2) The experience and skill of the modeler.
3) The tools selected.  Some tools are designed to give very quick, if somewhat approximate, results; other tools are inherently more accurate, if somewhat slower.

Models can frequently be improved through better selection or preparation of the data, including the addition of appended data.  However, even when the data is exactly the same, the selection of the modeling tool can be critical.

Many modelers tend to utilize only one tool in creating their models, frequently the one they are most comfortable with or were initially trained on: logistic regression, neural nets, decision trees, Bayesian classifiers, support vector machines, or genetic programming.  Not all tools produce equally accurate answers when applied to the same data set.

How important is accuracy?  This case study illustrates that a change in fitness of only 0.01 point can mean a financial improvement of nearly 8% in campaign ROI.  Greater increases in model quality translate into correspondingly larger financial improvements.  The benefit each user actually receives will depend on how much the model can be improved and on the financial details of the offer, but this example should make one thing clear: small increases in model quality can translate into large increases in financial performance.

Example:

This example is based on actual data from a major technology and services company pursuing cross-sell and up-sell opportunities with its existing customers.  It would be equally true of initiatives aimed at new customer acquisition or customer retention (churn/defection prevention), or of any of the other major uses of scoring (regression) models such as fraud detection, credit scoring, or billing review.

The data is from a large direct mail test where the overall response rate was found to be 1%, very typical for this type of campaign.  In our simplified example we assume a full mailing to all available targets would be 250,000 pieces at a cost of $3.00 per mailing, and with a gross profit of $300 per successful sale.

This means that a mailing to all 250,000 targets would require an investment of $750,000 (250,000 pieces at $3.00 each) and would return exactly $750,000 (2,500 sales at $300 each), a break-even result.  Most business managers would regard this as a bad investment and would elect not to conduct the full mailing, counting the cost of the test mailing as the sunk cost of an unsuccessful promotion.
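The break-even arithmetic above can be checked in a few lines, using only the figures stated in the article:

```python
# Economics of the full mailing, using the article's figures.
list_size = 250_000        # total available targets
cost_per_piece = 3.00      # dollars per mailed piece
response_rate = 0.01       # 1% overall response rate
profit_per_sale = 300.00   # gross profit per successful sale

total_cost = list_size * cost_per_piece           # $750,000 invested
expected_sales = list_size * response_rate        # 2,500 responders
total_return = expected_sales * profit_per_sale   # $750,000 returned
net_profit = total_return - total_cost            # $0: break-even

print(f"cost=${total_cost:,.0f}  return=${total_return:,.0f}  net=${net_profit:,.0f}")
```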

To illustrate the difference that small improvements in accuracy can make, we developed two models, one with a fitness measure of 0.195064 and the other with a fitness measure of 0.182995, a difference of only 0.012069.  The fitness measure is the remaining unexplained difference between the actual data and the model; lower scores are better, and a fitness measure of 0.00 means the model completely explains and predicts the actual data.  Both of these models therefore show good, useful predictive ability, each explaining more than 80% of the variation in the actual results.
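The article does not specify exactly how its fitness measure is computed.  One common measure with the same stated properties (0.0 means a perfect fit, lower is better, and the value reads as the fraction of the outcome left unexplained) is the fraction of unexplained variance, i.e. 1 − R².  A minimal sketch, assuming that interpretation:

```python
def unexplained_fraction(actual, predicted):
    """Fraction of variance in `actual` not explained by `predicted` (1 - R^2).

    0.0 = perfect fit; 1.0 = no better than predicting the mean.
    """
    mean = sum(actual) / len(actual)
    ss_total = sum((a - mean) ** 2 for a in actual)
    ss_resid = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return ss_resid / ss_total
```

Under this reading, a model scoring 0.182995 leaves about 18.3% of the variation unexplained, versus about 19.5% for the 0.195064 model, consistent with the article's "more than 80% explained" statement.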

In the table below, the business manager evaluates the less accurate of the two models and finds that his mailing can yield a good profit of $163,043 if he mails only to the top 50% of the list.  The model has scored all prospects from 0 to 1 based on their likelihood to buy; after evaluating the net profit (projected profit from sales less the cost of mailing) for each decile of the list (a decile is 10% of the list, a very common division for this analysis), he sees that the bottom half of the list is a money-losing proposition but that the top half is profitable.  This table is known as a lift analysis.

[Table 1: lift analysis by decile, based on the less accurate model]
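The lift analysis described above can be sketched as follows.  The prospect scores here are simulated (the article's actual scores and table are not reproduced); real scores would come from the fitted model, but the mechanics of the decile table are the same:

```python
import random

# Hypothetical sketch of a decile lift analysis: score every prospect, sort
# descending by score, split into ten equal deciles, and compute each decile's
# net profit (sales profit minus mailing cost).
random.seed(42)
COST_PER_PIECE, PROFIT_PER_SALE = 3.00, 300.00

# Simulate 250,000 prospects: a score in [0, 1] and a purchase flag whose
# probability rises with the score (about 1% response overall).
prospects = []
for _ in range(250_000):
    score = random.random()
    bought = random.random() < 0.02 * score  # better scores buy more often
    prospects.append((score, bought))

prospects.sort(key=lambda p: p[0], reverse=True)  # best prospects first
decile_size = len(prospects) // 10

cumulative = 0.0
for d in range(10):
    chunk = prospects[d * decile_size:(d + 1) * decile_size]
    sales = sum(1 for _, bought in chunk if bought)
    net = sales * PROFIT_PER_SALE - len(chunk) * COST_PER_PIECE
    cumulative += net
    print(f"decile {d + 1:2d}: sales={sales:5d}  net=${net:12,.2f}  cum=${cumulative:12,.2f}")
```

Running this shows the pattern the article describes: the top deciles are profitable, the bottom deciles lose money, and cumulative net profit peaks partway down the list, which is where the manager should stop mailing.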

However, if the manager had the benefit of the better model (Table 2), better by only 0.012 points in fitness, he could forecast a profit of $175,679, an improvement of 7.75%.

[Table 2: lift analysis by decile, based on the more accurate model]
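The 7.75% figure is simply the relative gain between the two forecast profits:

```python
# Relative improvement between the two models' forecast profits.
profit_weaker = 163_043   # top-half mailing, less accurate model
profit_better = 175_679   # top-half mailing, more accurate model

improvement = (profit_better - profit_weaker) / profit_weaker
print(f"{improvement:.2%}")  # -> 7.75%
```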

Small improvements in model accuracy can make big improvements in financial outcome.  Be sure to ask the question:  Is this really the most accurate model that can be created from my data?


Bill Vorhies
Editorial Director, DSC


Comment by Baguinebie Bazongo on July 27, 2015 at 9:02pm

Very interesting post

Thank you, William.

Comment by William Vorhies on October 24, 2014 at 1:15pm

Ralph:

Good point.  Too many times I've seen clients take the first model they could optimize for the day and call it quits without trying alternate ML tools.  It's almost too easy to show them that they could be doing better.

Comment by Ralph Winters on October 24, 2014 at 11:01am

One could also make the argument that if a small improvement in model accuracy makes a big difference in the results, the model may not be as robust as you would like and could need constant monitoring.

Comment by Kumaran Ponnambalam on October 8, 2014 at 7:01pm

Interesting post
