The success of any big data or data science initiative is determined by the data you collect and how you analyze it. In this article, we describe a simple criterion for selecting strong metrics out of dozens, hundreds, or even millions of potential predictors, sometimes called features or rules by machine learning professionals, and independent variables by statisticians.


This criterion is especially useful for practitioners lacking domain expertise, and it can be applied automatically. It belongs to a class of synthetic metrics that we have developed in our data science research laboratory.

Traditional metrics for variable selection are subject to biases and were derived primarily for their mathematical elegance. By contrast, synthetic metrics are designed for their practical usefulness, and they take advantage of modern computing power.

In a nutshell, synthetic metrics are to traditional metrics what synthetic pharmaceutical drugs are to traditional drugs: more powerful, more flexible, offering far more opportunities, and less expensive. Our research lab has created several synthetic metrics so far, most recently synthetic confidence intervals.

Let's discuss our synthetic predictive power, a metric that measures how good a variable is at predicting the future. The traditional equivalent is R-squared, which is subject to over-fitting and sensitive to outliers. A more modern alternative is entropy, still very much a natural metric, derived from physics principles rather than built from scratch.
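To make the entropy comparison concrete, here is a minimal illustration (our own, not from the referenced articles) of the Shannon entropy of a binary fraud / non-fraud outcome within a data bin; it is 0 for a pure bin and maximal (1 bit) for a 50/50 split:

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a binary outcome with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a pure bin carries no uncertainty
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.5))   # 1.0 -- maximum uncertainty, no discrimination
print(binary_entropy(0.0))   # 0.0 -- pure bin, perfect discrimination
```

Note the inverted scale: low entropy corresponds to high discrimination, whereas the predictive power W discussed below is oriented so that higher means better.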

The predictive power W is a hybrid (semi-synthetic, close to entropy), easy-to-interpret, robust statistic that measures the strength of a predictor. It is defined in our article on feature selection (see the subsection called predictive power). In the context of scoring and fraud detection models, it has the following properties:

- It is minimal and equal to 0 when the variable provides no information about fraud versus non-fraud.
- It is maximal and equal to 1 when we have perfect discrimination between fraud and non-fraud in a given data bin.
- It is symmetric: if you swap Good and Bad (G and B in a fraud or spam detection context), it yields the same predictive power.
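One simple per-bin statistic satisfying all three properties is the normalized difference |G − B| / (G + B). The sketch below illustrates the properties only; it is not the exact formula from the referenced feature-selection article, and the function name is our own:

```python
def bin_power(good, bad):
    """Discrimination strength of one data bin: 0 = no information, 1 = perfect."""
    total = good + bad
    if total == 0:
        return 0.0  # empty bin: no evidence either way
    return abs(good - bad) / total

# Property checks
assert bin_power(50, 50) == 0.0                 # 0 when the bin is uninformative
assert bin_power(80, 0) == 1.0                  # 1 on perfect discrimination
assert bin_power(30, 70) == bin_power(70, 30)   # symmetric in Good and Bad
```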

This metric W has been used to automatically and very quickly select multivariate features out of trillions of trillions of combinations, in order to optimize predictive scoring systems, in particular in data science techniques such as hidden decision trees. The source code and examples will be published in our upcoming book on automated data science; some can already be found in my recent data science book.
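To show how a per-bin statistic aggregates into a single feature score usable for ranking, here is a generic sketch (our own illustration, not the hidden-decision-trees code from the book): W for a feature is taken as the weighted average of per-bin discrimination, and candidate features are sorted by it. The feature names and counts are invented for the example:

```python
def predictive_power(bins):
    """W for one feature: weighted average of per-bin |G-B|/(G+B).

    bins is a list of (good, bad) counts; since each bin's weight is
    (g+b)/total, the weighted average simplifies to sum|g-b| / total.
    """
    total = sum(g + b for g, b in bins)
    if total == 0:
        return 0.0
    return sum(abs(g - b) for g, b in bins) / total

# Hypothetical per-bin counts for two candidate features
features = {
    "has_suspicious_ip": [(90, 10), (20, 80)],   # strongly discriminating
    "day_of_week":       [(48, 52), (51, 49)],   # nearly useless
}
ranked = sorted(features, key=lambda f: predictive_power(features[f]), reverse=True)
print(ranked)  # ['has_suspicious_ip', 'day_of_week']
```

For multivariate features, the same scorer would be applied to bins built from combinations of variables; pruning weak combinations early is what makes the search over huge combination spaces tractable.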

**About the author**

Dr. Vincent Granville is a visionary data scientist, author, publisher, entrepreneur, growth hacker, and co-founder of Data Science Central.

© 2019 Data Science Central®
