
Setting the Cutoff Criterion for Probabilistic Models

For decision making, human intuition tends to split probabilities into above 50% and below, which is plausible. For most probabilistic models, in contrast, this is not the case at all. The resulting probabilities are frequently neither distributed symmetrically between zero and one around a mean of 0.5 nor correct in terms of absolute values. This is often an issue when the underlying dataset contains a minority class.

For example, if a probabilistic model returns 40% for having an accident given a blood alcohol level of 0.5‰, this does not necessarily mean that you should predict this case as "no accident".

Examining the distribution of the predicted probabilities, you might notice a concentration near zero. This is not necessarily wrong, but you can easily validate whether it is better to adjust your cutoff criterion by lowering it, or raising it. A ROC analysis helps as well.
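
As a minimal sketch of such a validation, the following Python snippet picks a cutoff from the ROC curve via Youden's J statistic. The arrays y_true and y_prob are placeholders for your own labels and model output, and Youden's J is just one of several reasonable ways to choose a threshold.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Placeholder data: replace with your own labels and predicted probabilities.
y_true = np.array([0, 0, 1, 0, 0, 1, 0, 1, 0, 1])
y_prob = np.array([0.05, 0.12, 0.38, 0.20, 0.31, 0.45, 0.08, 0.52, 0.27, 0.61])

# ROC curve: one (false positive rate, true positive rate) pair per threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)

# Youden's J statistic picks the threshold that maximizes TPR - FPR.
best = np.argmax(tpr - fpr)
print(f"Suggested cutoff: {thresholds[best]:.2f}")
```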

If you have doubts about the shape of the resulting probability distribution, you can reshape it:

Suppose you found that the cutoff should be at 40% instead of 50%. Then you know three things:

  1.  p = 0 should remain 0
  2.  p = 1 should remain 1
  3.  p = 0.4 should become 0.5

A root function p^x (with an exponent between 0 and 1) maps 0 to 0 and 1 to 1, so it fulfills the first two requirements. The third requirement fixes the exponent; the rest is simple mathematics.

0.5 = 0.4^x

x = log(0.5) / log(0.4) ≈ 0.76

or, for an arbitrary cutoff:

x = log(0.5) / log(cutoff)

With this root function you can adjust all predicted probabilities. Applying the exponent reduces, at least slightly, the pile-up of the probability distribution near zero; the lower the probability, the stronger the effect.
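
As a minimal sketch, this is how the adjustment might look in Python. The function name reshape_probabilities and the example values are illustrative, not part of the article:

```python
import numpy as np

def reshape_probabilities(p, cutoff=0.4):
    """Map probabilities with p -> p**x, where x = log(0.5) / log(cutoff).

    This keeps 0 and 1 fixed and moves the chosen cutoff onto 0.5.
    """
    x = np.log(0.5) / np.log(cutoff)
    return np.asarray(p) ** x

probs = np.array([0.0, 0.1, 0.4, 0.7, 1.0])
print(reshape_probabilities(probs, cutoff=0.4))
# approximately [0.0, 0.175, 0.5, 0.763, 1.0]: 0.4 lands on 0.5, 0 and 1 stay put
```

After this adjustment you can keep the familiar 0.5 cutoff for classification while working with the reshaped probabilities.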

In many cases you will see that this performs better than simply lowering the cutoff criterion.