Machine Learning vs. Traditional Statistics: Different Philosophies, Different Approaches

"Machine Learning (ML)" and "Traditional Statistics (TS)" follow different philosophies in their approaches. With "Data Science" at the forefront and getting lots of attention and interest, I would like to dedicate this blog to discussing the differences between the two. I often see discussions and arguments between statisticians and data miners/machine learning practitioners about the definition of "data science," its coverage, and the required skill sets. All that is needed is to pay attention to the evolution of these fields.


There is no doubt that when we talk about "analytics," both data mining/machine learning practitioners and traditional statisticians have been players. However, there are significant differences in the approaches, applications, and philosophies of the two camps that are often overlooked.

What is ML?

ML is a branch of Artificial Intelligence (AI). AI focuses on understanding intelligence and how to replicate it in machines (systems or agents). ML aims at the automatic discovery of regularities in data through the use of computer algorithms, and at generalizing from those regularities to new but similar data. Its main focus is the study and design of systems that can "learn from data," with an emphasis on inductive learning (learning from examples). ML is not the same as "data mining" or "predictive analytics": those are practices, and ML is a core part of both.
ML's roots go back to the 1950s, and many startups formed in the late 1980s and early 1990s with commercially successful applications such as real-time fraud detection, character recognition, and recommender systems (the first generation of ML systems). ML is also closely related to "Pattern Recognition" (PR). While ML grew out of computer science, pattern recognition has engineering roots. The two, however, are facets of the same field, since the focus of both is learning from data. Today, the resurgence of ML is driving the next big wave of innovation.

ML Application Variety

Data mining and predictive analytics

  • Fraud detection, ad placement, credit scoring, recommenders, drug design, stock trading, customer relationship & experience, …

Text processing & analysis

  • Web search, spam filtering, sentiment analysis, …

Graph mining

Pattern recognition

  • Speech recognition, human genome, bioinformatics, optical character recognition (OCR), face recognition, self-driving cars, scene analysis, …

ML Community/Practitioners

  • Typically a computer science and/or engineering background
  • More programming savvy
  • Not confined to a single tool
  • Open-source friendly
  • Favor rapid prototyping of ideas and solutions

ML vs. Traditional Statistics

Historically, ML techniques and approaches have relied heavily on computing power. TS techniques, on the other hand, were mostly developed when computing power was not an option. As a result, TS relies heavily on small samples and strong assumptions about the data and its distribution.

ML in general makes fewer prior assumptions about the problem and is liberal in the approaches and techniques it uses to find a solution, often employing heuristics. The preferred learning method in machine learning and data mining is inductive learning. At its extreme, in inductive learning the data is plentiful or abundant, and often not much prior knowledge exists, or is needed, about the problem and the data distributions for learning to succeed. The other end of the learning spectrum is called analytical learning (deductive learning), where data is often scarce, or it is preferred (or customary) to work with small samples of it, and there is good prior knowledge about the problem and data. In the real world, one often operates between these two extremes. Traditional statistics, by contrast, is conservative in its approaches and techniques and often makes tight assumptions about the problem, especially about data distributions.
The following table shows some of the differences in approach and philosophy between the two fields:

| Machine Learning (ML) | Traditional Statistics (TS) |
| --- | --- |
| Goal: "learning" from data of all sorts | Goal: analyzing and summarizing data |
| No rigid pre-assumptions about the problem and data distributions in general | Tight assumptions about the problem and data distributions |
| More liberal in its techniques and approaches | Conservative in its techniques and approaches |
| Generalization is pursued empirically through training, validation, and test datasets | Generalization is pursued using statistical tests on the training dataset |
| Not shy of using heuristics in search of a "good solution" | Uses tight initial assumptions about the data and the problem, typically in search of an optimal solution under those assumptions |
| Redundancy in features (variables) is okay and often helpful; prefers algorithms designed to handle large numbers of features | Often requires independent features; prefers a smaller number of input features |
| Does not promote data reduction prior to learning; promotes a culture of abundance: "the more data, the better" | Promotes data reduction as much as possible before modeling (sampling, fewer inputs, …) |
| Has faced more complex problems in learning, reasoning, perception, knowledge representation, … | Mainly focused on traditional data analysis |
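To make the point about empirical generalization concrete, here is a minimal sketch in plain Python (the toy dataset and threshold "learner" are illustrative assumptions, not from the article): the model is fit on a training split, a validation split would guide model selection, and only the held-out test split estimates how well the model generalizes.

```python
import random

# Toy 1-D dataset: the label is 1 exactly when the feature exceeds 0.5.
random.seed(0)
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(100))]

# ML-style empirical generalization: split into train/validation/test.
random.shuffle(data)
train_set, valid_set, test_set = data[:60], data[60:80], data[80:]

def accuracy(threshold, rows):
    """Fraction of rows a simple threshold rule classifies correctly."""
    return sum(int(x > threshold) == y for x, y in rows) / len(rows)

# "Training": pick the grid threshold that maximizes training accuracy.
best = max((t / 100 for t in range(100)), key=lambda t: accuracy(t, train_set))

# Validation accuracy would guide model selection; test accuracy is the
# final estimate of generalization, computed on data the learner never saw.
print(accuracy(best, valid_set), accuracy(best, test_set))
```

The key discipline is that the test split is touched exactly once, at the end; tuning against it would turn it into a second training set and bias the generalization estimate.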
Learning could, in principle, be achieved by manually writing a program that covers all possible data patterns. But that is exhaustive work, generally impossible to accomplish for real-world problems, and such a program would never be as good or as thorough as a learning algorithm. Learning algorithms learn from examples (as humans do) automatically, and they generalize based on what they learn (inductive learning). Generalization is a key aspect of evaluating the performance of a learner.

At the highest level, the most popular learning algorithms can be categorized into supervised and unsupervised types, and each of those into high-level useful categories (also called data mining functions):
Supervised learning includes:
  • Classification: Predicting to which discrete class an entity belongs (binary classification is used the most)—e.g., whether a customer will be high-risk.
  • Regression: Predicting continuous values of an entity’s characteristic—e.g., how much an individual will spend next month on his or her credit card, given all other available information.
  • Forecasting: Estimation of macro (aggregated) variables such as total monthly sales of a particular product.
  • Attribute Importance: Identifying the variables (attributes) that are the most important in predicting different classification or regression outcomes.
Unsupervised learning includes:
  • Clustering: Finding natural groupings in the data.
  • Association models: Analyzing “market baskets” (e.g., novel combinations of the products that are often bought together in shopping carts).
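A rough illustration of the supervised/unsupervised split, as a toy sketch in plain Python (the data and the two simple algorithms are assumptions for illustration): classification consults the labels during learning, while clustering discovers groups from the raw values alone.

```python
# Two well-separated 1-D groups; labels exist, but the clusterer never sees them.
points = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
labels = [0, 0, 0, 1, 1, 1]

# Supervised (classification): 1-nearest-neighbour uses the labels.
def classify(x, train_pts, train_labels):
    nearest = min(range(len(train_pts)), key=lambda i: abs(train_pts[i] - x))
    return train_labels[nearest]

# Unsupervised (clustering): 2-means on the raw points, ignoring labels.
def kmeans2(pts, iters=10):
    c0, c1 = min(pts), max(pts)  # crude initialization at the extremes
    for _ in range(iters):
        g0 = [p for p in pts if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in pts if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

print(classify(4.9, points, labels))                 # predicted class for a new point
print(sorted(round(c, 1) for c in kmeans2(points)))  # discovered group centres
```

The classifier needs labeled examples to predict a class for the new point; the clusterer recovers the two natural groupings with no labels at all, which is exactly the distinction between the two families above.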

Statistical Learning Theory

Historically, statisticians have been skeptical of machine learning and resistant to accepting it, largely because of ML's liberal approach and lighter emphasis on theoretical proofs. The good news is that "Statistical Learning Theory" has bridged the gap and provided an umbrella theory under which both sides can collaborate and operate. Basic statistical concepts are a cornerstone of many engineering and science fields, much as math is. But sticking to traditional statistical thinking and practices would have prevented progress; these are two different things, and ML has proved that in practice. For those interested in understanding a bit about Statistical Learning Theory and its relation to ML, see the lecture by Yaser S. Abu-Mostafa at Caltech.




Comment by Jin Li on September 27, 2017 at 5:22pm

The statement 'Redundancy in features (variables) is okay, and often helpful' could be misleading. Feature (variable) selection has been found to be important for improving predictive accuracy. Please see the brief review in the introduction and the further discussion in the reference below for details.


Comment by Thomas Lincoln on November 20, 2016 at 6:55am

Ah, statistics is very useful in linear, repeatable, scientific analysis, as science deals with the environment and very stable relationships. ML and other associative, nonlinear types of analysis and prediction are more applicable to human behaviors, which are not linear, not always repeatable, and have very fat tails of non-standard behaviors. That is where the mathematical models of traditional economics, with their traditional rationality assumptions, and the non-standard, non-traditional theories of behavioral economics/finance don't intersect very well. That is where the generalizations of ML work better for human behaviors than the traditional statistics of the environment.

Comment by Khosrow Hassibi on November 4, 2016 at 2:04pm


Thanks for the comments. 

(1) Per what I wrote, the fact is that traditional statistical ideas (descriptive and inferential) were originally developed for dealing with small datasets in the days of mechanical adding machines. However, they have remained, and will remain, relevant even in the world of big data, and the techniques and theory are used in many disciplines. There are also statisticians who have done great recent work in learning theory to take it beyond what is called traditional. You can google "traditional statistics" and see what you find; it is not my term. (See this one too: https://www.coursera.org/learn/real-life-data-science/lecture/nR3sM...).

(2) When I mentioned "clustering," "forecasting," and "regression," I did not mean any particular algorithm for clustering, forecasting, or regression (like logistic or linear regression). The list I gave contains "general functions" proposed by the software industry more than a decade ago to bring some order to model interoperability between different systems in data mining, specifically PMML (Predictive Model Markup Language). Each function, like regression, clustering, or forecasting, can be implemented by tens of different algorithms, some of which came out of statistics and many others from outside it.

Comment by Khurram on November 4, 2016 at 11:13am

Hi Palu,

How can you claim your statement is valid grounds to dismiss the term "traditional statistics"?

"Is there something called traditional machine learning? If the answer is yes,  then the title is meaningful. However if the answer is no, then it is meaningless."

The author described a whole blended framework (ML + statistics), and I don't think he fabricated any terminology; he is not presenting traditional statistics as excluding ML.

Comment by Sione Palu on October 31, 2016 at 2:52pm

Is there something called traditional machine learning? If the answer is yes, then the title is meaningful. However, if the answer is no, then it is meaningless.

Clustering is statistics.

Forecasting is statistics (prominent researchers here were Box & Jenkins et al.).

Regression is statistics.

There are authors here on Data Science Central who write misleading articles. IMO, authors should check their facts before posting here, as this is a site frequented by academics and expert analysts. It's OK not to check facts, but it will get authors exposed as uninformed or ignorant.
