
Synthetic criterion to choose the right variables for your predictive algorithm

The success of any big data or data science initiative is determined by the kind of data you collect and how you analyze it. In this article, we describe a simple criterion for selecting strong metrics out of dozens, hundreds, or even millions of potential predictors - sometimes called features or rules by machine learning professionals, or independent variables by statisticians.


This criterion is especially useful for practitioners lacking domain expertise or vision, and it can be applied automatically. It belongs to a class of synthetic metrics that we have developed in our data science research laboratory.

Traditional metrics for variable selection are subject to biases and were derived primarily for their mathematical elegance. By contrast, synthetic metrics are designed for usefulness and take advantage of modern computing power.

In a nutshell, synthetic metrics are to traditional metrics what synthetic pharmaceutical drugs are to traditional drugs: more powerful, more flexible, offering far more opportunities, and less expensive. Our research lab has created a few synthetic metrics so far, including, most recently, synthetic confidence intervals.

Let's discuss our synthetic predictive power, a metric that measures how good a variable is at predicting the future. The traditional equivalent is R-squared, which is subject to over-fitting and sensitive to outliers. A more modern alternative is entropy - still very much a natural metric, derived from physics principles rather than built from scratch.

The predictive power W is a hybrid (semi-synthetic, close to entropy), easy-to-interpret, robust statistic that measures the strength of a predictor. It is defined in our article on feature selection (see the subsection called predictive power). In the context of scoring and fraud detection models, it has the following properties (a minimal Python sketch follows the list):

  • It is minimal and equal to 0 when the variable provides no information about fraud versus non-fraud.
  • It is maximal and equal to 1 when we have perfect discrimination between fraud and non-fraud in a given data bin.
  • It is symmetric: if you swap Good and Bad (G and B in a fraud or spam detection context), it yields the same predictive power.
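To make these properties concrete, here is a minimal sketch. The per-bin statistic |B - G| / (B + G), where B and G are the bad (fraud) and good counts in a bin, is a hypothetical instantiation chosen only because it satisfies the three properties above (assuming a balanced 50/50 base rate); it is not necessarily the exact W defined in the feature selection article.

```python
def bin_predictive_power(bad: int, good: int) -> float:
    """Per-bin score in [0, 1]: |B - G| / (B + G).

    Hypothetical choice: 0 when bads and goods balance in the bin
    (no information, assuming a 50/50 base rate), 1 when the bin is
    purely fraud or purely non-fraud, and unchanged if the two
    labels are swapped. Not necessarily the author's exact W.
    """
    total = bad + good
    return abs(bad - good) / total if total else 0.0


def predictive_power(bins: list[tuple[int, int]]) -> float:
    """Aggregate the per-bin scores, weighting each bin by its size."""
    n = sum(b + g for b, g in bins)
    if n == 0:
        return 0.0
    return sum((b + g) * bin_predictive_power(b, g) for b, g in bins) / n


# Example: one perfectly discriminating bin plus one uninformative bin.
print(predictive_power([(10, 0), (5, 5)]))  # 0.5
```

Note how the weighted aggregation keeps the score in [0, 1]: a feature scores 1 only if every bin is pure, and 0 only if every bin is uninformative.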

This metric W has been used to automatically and very quickly select multivariate features out of trillions of trillions of combinations, and to optimize predictive scoring systems, in particular in data science techniques such as hidden decision trees. The source code and examples will be published shortly in our upcoming book on automated data science; some can already be found in my recent data science book.
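As an illustration of how such a score might drive feature ranking, the sketch below bins one numeric feature by quantiles, aggregates the per-bin scores using predictive_power() from the previous snippet, and sorts candidate features by the result. The quantile binning, function names, and ranking loop are assumptions for illustration, not the hidden decision trees implementation.

```python
import numpy as np

def score_feature(values: np.ndarray, is_fraud: np.ndarray, n_bins: int = 10) -> float:
    """Bin one feature by quantiles and score it with predictive_power().

    Assumes predictive_power() from the sketch above is in scope and
    that is_fraud is a boolean array aligned with values.
    """
    # Quantile bin edges; duplicate edges can occur for skewed data
    # (acceptable for a sketch).
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, values, side="right") - 1, 0, n_bins - 1)
    bins = [(int(np.sum((idx == k) & is_fraud)),
             int(np.sum((idx == k) & ~is_fraud)))
            for k in range(n_bins)]
    return predictive_power(bins)

# Hypothetical usage: rank candidate features, strongest first.
# ranked = sorted(candidates, key=lambda name: score_feature(X[name], y), reverse=True)
```

Because each feature is scored independently, the scan parallelizes trivially, which is what makes brute-force screening of huge candidate sets feasible in practice.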

About the author

Dr. Vincent Granville is a visionary data scientist, author, publisher, entrepreneur, growth hacker, and co-founder of Data Science Central.
