Last weekend, I was waiting in New York's Penn Station when the public address system gave the familiar "See Something, Say Something" message. It took a minute to sink in, but I had to laugh. Midtown Manhattan **IS** suspicious and unusual activity.

**Speaking of outliers**

In practice, data is dirty, and big data is filthy. Analysts munge, wrangle and clean their sources, and a good analysis will account for the rejected observations. In August, the NY Times joined the recent crowd calling this "janitorial" work and claimed that data scientists spend "50 percent to 80 percent of their time mired in this more mundane labor". It is not glamorous, and it is getting more difficult, but it is necessary, even priceless.

Suppressing data can be justified on several grounds:

- **Materiality** - Observations can be dropped if their absence would be insignificant to aggregates and would not change the directional conclusion of the analysis.
- **Statistics** - Formal methods can be applied for rejecting data. Look at Peirce's criterion, Grubbs' test, Chauvenet's criterion, Dixon's Q test or, frankly, propose a new one that sounds as serious.
- **Reasonableness** - Some elements just don't make sense. If one attribute is wrong, the whole observation may be considered suspicious and discarded.
- **Completeness** - Most databases and statistical tools expect NAs, nulls or NaNs (not-a-number). Data can be optional, and processes can be incomplete, so dropping empty data is tempting.
- **Error** - The observation violates some stated business rule. Software captures data, and software can have bugs, so we write the data off as defective and ignore it.
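To make one of those formal methods concrete, here is a minimal sketch of Chauvenet's criterion in plain Python. The function name and return shape are my own choices; the rule itself is the standard one: reject an observation if the expected number of points deviating at least that far from the mean, N × P(|Z| ≥ z), falls below 0.5.

```python
import math
from statistics import mean, stdev

def chauvenet(data):
    """Split data into (kept, rejected) lists using Chauvenet's criterion.

    An observation x is rejected when n * P(|Z| >= z) < 0.5, where
    z = |x - mean| / stdev and P is the two-tailed normal probability.
    """
    n = len(data)
    m = mean(data)
    s = stdev(data)
    kept, rejected = [], []
    for x in data:
        z = abs(x - m) / s
        # Two-tailed probability of a normal deviation at least this large
        prob = math.erfc(z / math.sqrt(2))
        if n * prob < 0.5:
            rejected.append(x)
        else:
            kept.append(x)
    return kept, rejected

kept, rejected = chauvenet([9.8, 10.1, 10.0, 10.2, 9.9, 10.3, 50.0])
```

Here the 50.0 lands in `rejected`. Note the point of this post: that rejected list is an output worth keeping and investigating, not a side effect to discard.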

**Missed opportunities**

All those dropped observations have value, though.

First, when we find a problem, we should tell someone. We don't have to, but we should. Like that "See Something, Say Something" announcement, communicating exceptions is an analyst's responsibility. Software gets fixed, other analysts save time, lessons get learned, customers get a better experience.

Second, this data may deserve some digging. If there's a process, people will find a workaround, and machine-generated data shows that computers do the same thing with controls. Data exceptions have stories behind them, stories that lead to new business rules and pattern discoveries. As with data errors, we don't have to pursue these stories, but we should. Researching outliers has a poor "a priori" business case: you don't know what you'll find. Tracking the value of what you have already learned is almost as good. That's an anecdotal business case.

The next time a package promises to automatically clean data, report that suspicious and unusual activity to anyone who will listen.

© 2019 Data Science Central ®
