In this post I will sometimes use the term “variable” to mean either “feature” (“predictor”) or “outcome” (“predicted value”).

The question of variable dependencies in a particular dataset is quite important, because answering it can help reduce the number of predictors used in a model. Or it can tell us that a feature is not helpful for model construction on its own, although it can still be used to engineer another predictor; for example, it is sometimes better to compute speed than to use raw distance values. In addition, some standard algorithms assume independence of features, and it is useful to know how close to reality that assumption is.

The standard way to check dependencies between variables is to compute their covariance matrix. But it captures only linear dependencies: if a dependency is not linear, the covariance matrix may not pick it up. Examples of this are well known and numerous, so I will not repeat them here.

Let us take a different approach. The definition of independent events is the following equality:

**Pr**(A and B)=**Pr**(A)**Pr**(B).

Hence for dependent events we should have inequality. A simple measure of such disparity is the absolute value of the difference between the left-hand side and the right-hand side:

|**Pr**(A and B)−**Pr**(A)**Pr**(B)|.
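As a quick sketch of this measure, we can estimate all three probabilities from boolean samples and compare the gap for an independent pair of events against a fully dependent one. The events, sample size, and probabilities below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical independent events, each with probability ~0.5.
a = rng.random(n) < 0.5
b = rng.random(n) < 0.5

p_a = a.mean()            # estimate of Pr(A)
p_b = b.mean()            # estimate of Pr(B)
p_ab = (a & b).mean()     # estimate of Pr(A and B)

# |Pr(A and B) - Pr(A)Pr(B)|: near zero for independent events.
gap_indep = abs(p_ab - p_a * p_b)

# For a fully dependent pair (B is A itself), the gap is large:
# Pr(A and A) - Pr(A)^2 = p - p^2, about 0.25 when p is 0.5.
gap_dep = abs((a & a).mean() - p_a * p_a)
```

With independent samples the gap shrinks toward zero as the sample grows, while for dependent events it stays bounded away from zero; the question addressed next is where to draw the line between the two.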

Since in Data Science we work with probability estimates, exact equality in the first formula is unlikely anyway. The question is: how far from zero may the difference in the second formula be before we conclude that the considered variables are dependent?

Well, in Data Science we can estimate bounds on a particular value with confidence intervals computed from the given data. For example, in R this can be done with the package “boot”, and in Python with “scikits.bootstrap”. Thus confidence intervals for **Pr**(A and B), **Pr**(A) and **Pr**(B) can be estimated with the desired confidence level. What is left to work out is a confidence interval for the product, **Pr**(A)**Pr**(B).
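To keep things self-contained, here is a minimal percentile-bootstrap sketch in plain NumPy rather than the “boot” or “scikits.bootstrap” packages; the function name, sample size, and the event probability of 0.3 are my own choices for illustration:

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for stat(sample).

    Resamples the data with replacement, recomputes the statistic on
    each resample, and returns the empirical (alpha/2, 1 - alpha/2)
    quantiles of those recomputed values.
    """
    rng = np.random.default_rng(seed)
    stats = np.array([
        stat(rng.choice(sample, size=len(sample), replace=True))
        for _ in range(n_resamples)
    ])
    return np.quantile(stats, alpha / 2), np.quantile(stats, 1 - alpha / 2)

# Indicator sample of a hypothetical event A with Pr(A) ~ 0.3;
# the mean of the indicators estimates the probability itself.
rng = np.random.default_rng(1)
a = (rng.random(5000) < 0.3).astype(float)
lo, hi = bootstrap_ci(a)   # 95% CI for Pr(A)
```

The same call applied to the indicator of “A and B” gives an interval for **Pr**(A and B).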

To estimate bounds for the product we can use a standard approach from Numerical Analysis: the one used to compute the accumulated error of a calculation caused by truncation errors.
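As a sketch of that error-propagation idea (under my own choice of notation): if **Pr**(A) is known to within ±δp of an estimate p, and **Pr**(B) to within ±δq of q, then writing the product of the perturbed values as pq + (Δp)q + p(Δq) + (Δp)(Δq) gives the strict bound |Δ(pq)| ≤ |q|δp + |p|δq + δpδq, where the last term is second-order and often dropped:

```python
def product_error_bound(p, dp, q, dq):
    """Strict bound on |p'q' - pq| when |p' - p| <= dp and |q' - q| <= dq.

    Follows from p'q' - pq = (p' - p)q + p(q' - q) + (p' - p)(q' - q);
    the dp*dq term is second-order but keeping it makes the bound exact.
    """
    return abs(q) * dp + abs(p) * dq + dp * dq

# Half-widths dp, dq would come from the confidence intervals of
# Pr(A) and Pr(B); the numbers here are made up for illustration.
bound = product_error_bound(0.3, 0.01, 0.5, 0.02)
```

In practice p, q would be the centers of the bootstrap intervals for **Pr**(A) and **Pr**(B), and dp, dq their half-widths, yielding an interval for **Pr**(A)**Pr**(B).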