**Summary:** *Here's an easy-to-understand example of how predictive analytics can reduce cost while increasing the efficacy of disease management programs.*

Healthcare providers have made major breakthroughs over the last two decades by creating and implementing increasingly sophisticated disease management programs (DMPs). At their core there are always two motives: first, to improve the human condition by preventing lesser symptomologies from becoming more severe diseases; and second, to reduce total system cost by using less expensive DMPs to prevent or delay the onset of a disease that is more costly to treat.

In the last few years, the most advanced and analytically driven healthcare providers have discovered that DMPs themselves can be optimized using the rapidly spreading techniques of predictive modeling. This is a hypothetical example of how a DMP can be optimized in this way.

**What Does It Mean to Optimize a DMP?**

Optimizing a DMP means recognizing that not all the members of the targeted pre-disease group will respond equally to the DMP. Does it make sense to provide the DMP, and incur the investment of the program, for individuals with a demonstrably very low probability of responding positively? Probably not. If we knew in advance that specific individuals were very unlikely to respond, we would likely elect not to provide the DMP and perhaps offer an alternate course of action instead.

Optimization can be calculated against many standards. For example, creating the greatest number of cases in which the severe disease does not manifest means providing a very large array of interventions, including the DMP, and incurring very high cost. Most providers recognize that this is simply not possible, since funds are not unlimited. In this example we will illustrate a simple financial optimization, but the cutoff or breakeven point can be adjusted by healthcare provider management to meet any combination of financial and ethical criteria desired.

In our example, based loosely on a pre-diabetic DMP designed to prevent or defer the onset of full diabetes, we will start with the following assumptions:

- Population currently eligible for the DMP: 100,000.
- Cost per targeted member to provide the DMP: $200.
- On an annual basis, then, the cost to provide the DMP to the entire group is $20,000,000.
- The higher cost to treat a member if the diabetes fully manifests is $500. This is the increased cost the DMP seeks to avoid or delay.
- The DMP is successful in preventing 40% of the population from converting to full diabetes. The savings, then, is the $500 treatment cost on the 40,000 who did not convert, or $20,000,000. The program is a breakeven.

*We acknowledge that a multi-year present value model of total DMP and treatment cost would be more accurate but for the purposes of this example we are using this simplified calculation to illustrate more clearly the impact that optimization with predictive modeling can have.*
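The baseline arithmetic above can be checked with a few lines of Python, using only the assumptions stated in the list (all figures are the article's hypothetical values):

```python
# Baseline break-even check for the hypothetical DMP.
population = 100_000        # members eligible for the DMP
dmp_cost_per_member = 200   # annual cost to provide the DMP ($)
treatment_cost = 500        # added cost if diabetes fully manifests ($)
response_rate = 0.40        # fraction prevented from converting

total_dmp_cost = population * dmp_cost_per_member      # $20,000,000
responders = int(population * response_rate)           # 40,000 members
total_savings = responders * treatment_cost            # $20,000,000
net = total_savings - total_dmp_cost                   # $0: breakeven

print(f"Cost ${total_dmp_cost:,}, savings ${total_savings:,}, net ${net:,}")
```

As the article states, cost and savings are both $20,000,000 when the DMP is offered to everyone, so the untargeted program breaks even.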

**Optimizing with Predictive Modeling**

Using predictive modeling techniques applied to member data already on hand, including diagnostic, treatment, and pharmacy codes plus demographics, we create a model (an algorithm) that calculates a score for every member of the population based on their likelihood to respond positively to the DMP. This technique is widely used in many industries and is based on a known sample of members who both did and did not respond to the DMP. The predictive modeling tools, sometimes called machine-learning tools, are capable of detecting even very weak correlations among large numbers of variables, and they produce the scoring algorithm that is then applied to previously unseen member data to predict who is most likely to respond in the future.
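As a minimal sketch of this step, the snippet below trains a scoring model on a labeled historical sample and produces probability scores for unseen members. The article does not name a specific tool, so this uses scikit-learn's logistic regression on synthetic stand-in features; a real model would use the diagnostic, treatment, pharmacy, and demographic variables described above:

```python
# Sketch: train a response-scoring model on a known sample of members
# who did and did not respond, then score unseen members.
# Features and outcomes here are synthetic placeholders, not real member data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Synthetic member features (stand-ins for codes and demographics).
X = rng.normal(size=(n, 3))
# Synthetic "responded to DMP" flag with a weak dependence on the features,
# mimicking the weak correlations these tools are built to detect.
logits = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The predicted probabilities are the "scores" the article describes:
# applied to unseen members, they rank who is most likely to respond.
scores = model.predict_proba(X_test)[:, 1]
accuracy = model.score(X_test, y_test)
print(f"Holdout accuracy: {accuracy:.1%}")
```

The holdout accuracy computed here plays the role of the "accuracy determined during development" discussed next.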

The accuracy of these models can be determined during development and it is typical for accuracy to be in the 70% to 90% range, sometimes higher. However, even models that are less accurate in aggregate can be quite accurate at the margins and therefore useful. Your professional predictive modeler will guide you in interpreting the implications.

In our example, we use an actual scoring model that scored slightly over 79% accuracy. That is, in about 8 cases out of 10 the model would correctly categorize members who would respond to the DMP versus those who would not. The technique then calls for us to score all members and rank the scores from high to low. Having done this, we look at the cost versus savings for each decile of the sample (a decile is simply 1/10th of the sample). A typical analysis using the 79% model proceeds as follows.

Since we have sorted the sample by score before dividing it into the 10 deciles, the first decile contains the highest-scoring members, who are predicted to be most likely to respond. Conversely, the 10th decile contains the lowest-scoring members, predicted to be least likely to respond. This concentrates responders together: without scoring we would expect 10% of responders (4,000) in each decile, but the predictive model has concentrated almost twice that many (18.4%) into just the first decile. Conversely, the 10th decile now contains less than 1% of those who respond positively.
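The sort-and-split mechanics can be sketched as follows. The scores and response flags below are synthetic (a real analysis would use the model's scores and observed outcomes), with responses simply made more likely at higher scores so the concentration effect is visible:

```python
# Sketch of the decile analysis: sort members by score, high to low,
# split into tenths, and count responders per decile.
# Scores and response flags are synthetic stand-ins for model output.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
scores = rng.random(n)                  # one score per member
# Synthetic responses, more likely at higher scores (overall rate ~40%).
responded = rng.random(n) < 0.40 * (0.2 + 1.6 * scores)

order = np.argsort(-scores)             # rank members from highest score down
deciles = np.array_split(responded[order], 10)   # 10 groups of 10,000
for i, d in enumerate(deciles, start=1):
    share = d.sum() / responded.sum()
    print(f"Decile {i}: {d.sum():>6,} responders ({share:.1%} of all responders)")
```

With any reasonably accurate model, the printout shows responders piling up in the early deciles and thinning out in the later ones, which is exactly the concentration the article describes.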

Comparing the savings for each decile (savings from responders less the cost of the DMP) shows where the program pays off. Providing the DMP only in those deciles where the savings is positive turns this previously breakeven program into one that saves a net $4.3 million, and may be more responsible in allocating scarce resources to those most likely to respond.

Note that there is a medical cost savings of $228 for each of the 7,351 responders in the first decile. In the 10th decile, by contrast, an additional cost of $4,920 is incurred for each of the 369 responders when both the cost of the DMP and the treatment cost are considered.
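These per-responder figures follow directly from the stated assumptions ($200 DMP cost per member, $500 avoided treatment cost, 10,000 members per decile) and can be verified in a few lines:

```python
# Verify the per-responder figures quoted above from the stated assumptions.
members_per_decile = 10_000
dmp_cost = 200        # DMP cost per member ($)
treatment_cost = 500  # avoided treatment cost per responder ($)

def net_per_responder(responders):
    """Net savings per responder in a decile (negative means a loss)."""
    net = responders * treatment_cost - members_per_decile * dmp_cost
    return net / responders

first = net_per_responder(7_351)   # first decile: 7,351 responders
tenth = net_per_responder(369)     # tenth decile: 369 responders
print(f"First decile net per responder: {first:,.0f}")   # → 228
print(f"Tenth decile net per responder: {tenth:,.0f}")   # → -4,920
```

The first decile's 7,351 responders save $3,675,500 against $2,000,000 of DMP cost, about $228 each; the tenth decile's 369 responders cannot come close to covering its $2,000,000 of DMP cost, a loss of about $4,920 per responder.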

Should the 6th or 7th decile members continue to receive the DMP even though it is a money-losing proposition? This is open to interpretation by the healthcare provider's management, but now you have quantifiable data on which to proceed.

*Note that best practice is to divide the population into demi-deciles (1/20th of the sample) for finer gradation; we have simplified to deciles here for clarity.*

December 3, 2013

Bill Vorhies, President & Chief Data Scientist – Data-Magnum - © 2014, all rights reserved.

About the author: Bill Vorhies is President & Chief Data Scientist at Data-Magnum and has practiced as a data scientist and commercial predictive modeler since 2001. He can be reached at:

The original blog can be viewed at:

http://data-magnum.com/optimizing-disease-management-programs-using...
