For decision making, human perception tends to split probabilities into above 50% and below, which is plausible. For most probabilistic models, in contrast, this is not the case at all. Frequently, the resulting probabilities are neither spread between zero and one around a mean of 0.5 nor correct in terms of absolute values. This is often an issue associated with the existence of a minority class in the underlying dataset.

*For example, if a probabilistic model puts the probability of having an accident given a blood alcohol level of 0.5‰ at 40%, this does not necessarily mean that you should predict this case as "no accident".*

Examining the probability distribution, you might notice a concentration of values near zero. This is not necessarily wrong, but you can easily check whether it is better to adjust your cutoff criterion by lowering it, or raising it. A ROC curve helps as well.
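As a minimal sketch of such a check, the snippet below scans candidate cutoffs on a hypothetical imbalanced dataset (the labels and probabilities are synthetic, generated for illustration) and scores each by balanced accuracy, which is robust to class imbalance:

```python
import numpy as np

# Hypothetical data: a 20% minority class, with model probabilities
# piled up near zero - the situation described in the text.
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.2).astype(int)
p = np.clip(rng.beta(2, 5, 1000) + 0.3 * y, 0.0, 1.0)

def balanced_accuracy(y_true, y_pred):
    tpr = np.mean(y_pred[y_true == 1])        # sensitivity
    tnr = np.mean(1 - y_pred[y_true == 0])    # specificity
    return (tpr + tnr) / 2

# Scan cutoffs from 0.05 to 0.95 and keep the best-scoring one.
cutoffs = np.linspace(0.05, 0.95, 19)
scores = [balanced_accuracy(y, (p >= c).astype(int)) for c in cutoffs]
best = cutoffs[int(np.argmax(scores))]
print(f"best cutoff: {best:.2f}")
```

With probabilities skewed toward zero, the best cutoff typically lands below 0.5, which is exactly the situation the article goes on to address.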

If you have doubts about the shape of the resulting probability distribution, you can reshape it:

Suppose you found that the cutoff should be at 0.4 instead of 0.5. You then know three things:

- p = 0 should remain 0
- p = 1 should remain 1
- p = 0.4 should become 0.5

A power function p^x fixes both endpoints, and for x < 1 it acts as a root function. The rest is simple mathematics:

0.5 = 0.4^x

x = log(0.5) / log(0.4) ≈ 0.756

or, in general: x = log(0.5) / log(cutoff)

With this root function you can adjust all probability results. At least to some degree, the exponent removes the skew of the probability distribution toward zero; the lower the probability, the stronger the effect.
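The derivation above can be sketched directly in code. The function names here are my own; the formula is the one from the article:

```python
import math

def adjustment_exponent(cutoff, target=0.5):
    # Solve target = cutoff ** x for x:  x = log(target) / log(cutoff)
    return math.log(target) / math.log(cutoff)

def adjust(p, cutoff):
    # Maps p = 0 to 0, p = 1 to 1, and p = cutoff to 0.5.
    return p ** adjustment_exponent(cutoff)

x = adjustment_exponent(0.4)
print(round(x, 3))                  # 0.756
print(adjust(0.0, 0.4))             # 0.0
print(adjust(1.0, 0.4))             # 1.0
print(round(adjust(0.4, 0.4), 3))   # 0.5
```

Because the exponent is below 1, every intermediate probability is pulled upward, with the relative effect strongest near zero.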

In many cases you will see that this performs better than simply lowering the cutoff criterion.

© 2020 Data Science Central
