The success of any big data or data science initiative is determined by the data you collect and how you analyze it. In this article, we describe a simple criterion for selecting great metrics out of dozens, hundreds, or even millions of potential predictors - sometimes called features or rules by machine learning professionals, or independent variables by statisticians.
This criterion is especially useful for practitioners who lack domain expertise or vision, and it can be applied automatically. It belongs to a class of synthetic metrics that we have developed in our data science research laboratory.
Traditional metrics for variable selection are subject to biases and were derived primarily for their mathematical elegance. By contrast, synthetic metrics are designed for usefulness, and they take advantage of modern computing power.
In a nutshell, synthetic metrics are to traditional metrics what synthetic pharmaceutical drugs are to traditional drugs: more powerful, more flexible, offering far more opportunities, and less expensive. Our research lab has created a few synthetic metrics so far, most recently synthetic confidence intervals.
Let's discuss our synthetic predictive power - a metric that measures how good a variable is at predicting the future. The traditional equivalent is R-squared, which is subject to overfitting and sensitive to outliers. A more modern alternative is entropy, though it is still very much a natural metric, derived from physics principles rather than built from scratch.
The predictive power W is a hybrid (semi-synthetic, close to entropy), easy-to-interpret, robust statistic that measures the strength of a predictor. It is defined in our article on feature selection (see the subsection called predictive power). In the context of scoring and fraud detection models, it has several useful properties.
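The exact definition of W is deferred to the feature-selection article, so the sketch below is not the author's metric. It is a hypothetical entropy-based stand-in with the same intent - an easy-to-interpret score between 0 (the feature tells us nothing about the label) and 1 (the feature determines the label exactly) - built from normalized information gain. All function names are illustrative.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def predictive_strength(feature, labels):
    """Normalized information gain of a categorical feature.

    Hypothetical stand-in for the W metric defined elsewhere:
    0 = the feature is uninformative, 1 = it fully determines the label.
    """
    base = entropy(labels)
    if base == 0:
        return 0.0  # labels are constant; nothing to predict
    n = len(labels)
    # Group labels by feature value, then compute conditional entropy.
    groups = {}
    for x, y in zip(feature, labels):
        groups.setdefault(x, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return (base - cond) / base

# Toy example: one feature perfectly separates fraud from non-fraud,
# the other is pure noise.
fraud = [0, 0, 1, 1]
print(predictive_strength(["a", "a", "b", "b"], fraud))  # 1.0
print(predictive_strength(["a", "b", "a", "b"], fraud))  # 0.0
```

One design choice worth noting: dividing by the base entropy puts scores for different label distributions on a common 0-to-1 scale, which makes ranking many candidate predictors straightforward.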
This metric W has been used to automatically and very quickly select multivariate features out of trillions of trillions of combinations, to optimize predictive scoring systems, in particular in data science techniques such as hidden decision trees. The source code and examples will be published in our upcoming book on automated data science; some can already be found in my recent data science book.
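The actual code is deferred to the books mentioned above, so the following is only a hedged illustration of the general idea: score feature combinations jointly by fusing them into a single composite feature, then rank. The strength function, the feature names, and the brute-force search are all hypothetical; at the scale described in the text (trillions of combinations), pruning or sampling would be required instead of exhaustive enumeration.

```python
from collections import defaultdict
from itertools import combinations

def fraud_rate_lift(feature, labels):
    """Crude strength score (hypothetical stand-in for W): weighted
    average absolute deviation of each feature value's fraud rate
    from the overall fraud rate."""
    overall = sum(labels) / len(labels)
    groups = defaultdict(list)
    for x, y in zip(feature, labels):
        groups[x].append(y)
    return sum(abs(sum(g) / len(g) - overall) * len(g)
               for g in groups.values()) / len(labels)

def best_pairs(features, labels, keep=3):
    """Score every pair of features jointly by fusing them into one
    composite key, then keep the strongest pairs. Brute force is only
    viable at toy scale."""
    scored = []
    for a, b in combinations(features, 2):
        fused = list(zip(features[a], features[b]))
        scored.append((fraud_rate_lift(fused, labels), (a, b)))
    scored.sort(reverse=True)
    return scored[:keep]

# Toy data: fraud occurs only when both signals fire together, so the
# (high_amount, new_account) pair ranks first.
labels = [1, 0, 0, 0, 1, 0, 0, 0]
features = {
    "high_amount": [1, 1, 0, 0, 1, 1, 0, 0],
    "new_account": [1, 0, 1, 0, 1, 0, 1, 0],
    "noise":       [1, 1, 0, 0, 0, 0, 1, 1],
}
print(best_pairs(features, labels))
```

This toy case also shows why multivariate selection matters: neither signal alone separates fraud from non-fraud, but their combination does.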
About the author