
A Tour of Machine Learning Algorithms

Originally published by Jason Brownlee in 2013, this overview is still a goldmine for machine learning professionals. The algorithms are broken down into several categories. Here we provide a high-level summary; a much longer, more detailed version can be found here. You can even download an algorithm map from the original article. Below is a much smaller version.

It would be interesting to list, for each algorithm,

  • examples of real-world applications,
  • the contexts in which it performs well,
  • whether it can be used as a black box,
  • ease of use and interpretation,
  • how it handles missing data,
  • whether an enterprise version is available,
  • integration with existing analytics platforms or real-time systems,
  • constraints on the data (e.g. Naive Bayes performs poorly on correlated variables),
  • maintenance and scalability issues,
  • availability of a distributed implementation,
  • speed and computational complexity,
  • whether it can easily be blended with other algorithms,

and generally speaking, compare these algorithms. I would add HDT, Jackknife regression, density estimation, attribution modeling (to optimize marketing mix), linkage (in fraud detection), indexation (to create taxonomies or for clustering large data sets consisting of text), bucketisation, and time series algorithms.

For more on machine learning (ML), click here.

Ensemble methods to fit data: see original paper

1. Regression Algorithms

  • Ordinary Least Squares Regression (OLSR)
  • Linear Regression
  • Logistic Regression
  • Stepwise Regression
  • Multivariate Adaptive Regression Splines (MARS)
  • Locally Estimated Scatterplot Smoothing (LOESS)
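To make the first entry concrete, here is a minimal sketch of OLS for a single predictor, fit via the closed-form normal equations (plain Python, no libraries; the data are made up for illustration):

```python
# Ordinary least squares for one predictor, via the closed form:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
def ols_fit(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Centered cross-product and sum of squares (the 1/n factors cancel).
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]  # roughly y = 2x + 1
slope, intercept = ols_fit(xs, ys)
```

With several predictors the same idea generalizes to solving the normal equations (XᵀX)β = Xᵀy.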

2. Instance-based Algorithms

  • k-Nearest Neighbour (kNN)
  • Learning Vector Quantization (LVQ)
  • Self-Organizing Map (SOM)
  • Locally Weighted Learning (LWL)
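Instance-based methods keep the training examples around and defer the real work to prediction time. A minimal kNN classifier, assuming Euclidean distance and a made-up two-class data set:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (squared Euclidean distance, no tie-breaking)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
label = knn_predict(train, (0.2, 0.1), k=3)
```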

3. Regularization Algorithms

  • Ridge Regression
  • Least Absolute Shrinkage and Selection Operator (LASSO)
  • Elastic Net
  • Least-Angle Regression (LARS)
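Regularization methods shrink coefficients to trade a little bias for lower variance. A sketch of ridge regression for a single predictor, using the closed form and leaving the intercept unpenalized (as is conventional); the data are illustrative:

```python
def ridge_fit(xs, ys, lam=1.0):
    """Ridge regression for one predictor: OLS plus an L2 penalty
    lam * slope**2, which shrinks the slope toward zero."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / (sxx + lam)   # lam = 0 recovers plain OLS
    return slope, my - slope * mx

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]            # exactly y = 2x
s0, _ = ridge_fit(xs, ys, lam=0.0)   # unshrunk (OLS) slope
s1, _ = ridge_fit(xs, ys, lam=5.0)   # shrunk slope
```

LASSO swaps the squared penalty for an absolute-value penalty, which drives some coefficients exactly to zero; Elastic Net mixes the two.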

4. Decision Tree Algorithms

  • Classification and Regression Tree (CART)
  • Iterative Dichotomiser 3 (ID3)
  • C4.5 and C5.0 (different versions of a powerful approach)
  • Chi-squared Automatic Interaction Detection (CHAID)
  • Decision Stump
  • M5
  • Conditional Decision Trees
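The decision stump is the simplest entry on this list: a one-level tree, often used as the base learner in boosting. A toy fitter that exhaustively searches thresholds on a single feature (illustrative data):

```python
def fit_stump(xs, ys):
    """Return the (threshold, left_label, right_label) split on one
    feature that misclassifies the fewest training points."""
    best = None
    for t in sorted(set(xs)):
        for left, right in ((0, 1), (1, 0)):
            preds = [left if x <= t else right for x in xs]
            errs = sum(p != y for p, y in zip(preds, ys))
            if best is None or errs < best[0]:
                best = (errs, t, left, right)
    return best[1:]  # threshold, label if x <= t, label otherwise

xs = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
ys = [0, 0, 0, 1, 1, 1]
t, left, right = fit_stump(xs, ys)
```

Full tree learners such as CART apply this same greedy split search recursively, over all features, with a purity criterion (e.g. Gini) instead of raw error counts.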

5. Bayesian Algorithms

  • Naive Bayes
  • Gaussian Naive Bayes
  • Multinomial Naive Bayes
  • Averaged One-Dependence Estimators (AODE)
  • Bayesian Belief Network (BBN)
  • Bayesian Network (BN)
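As a sketch of the first two entries, here is Gaussian naive Bayes in plain Python: per class, store a prior plus a per-feature mean and variance, then classify by the largest log-posterior under the (naive) assumption that features are conditionally independent. The data are made up:

```python
import math

def fit_gnb(data):
    """data: list of (feature_tuple, label). Returns, per label,
    (prior, per-feature means, per-feature variances)."""
    model = {}
    n = len(data)
    for label in set(y for _, y in data):
        rows = [x for x, y in data if y == label]
        means, vars_ = [], []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            m = sum(col) / len(col)
            v = sum((c - m) ** 2 for c in col) / len(col)
            means.append(m)
            vars_.append(max(v, 1e-9))  # guard against zero variance
        model[label] = (len(rows) / n, means, vars_)
    return model

def predict_gnb(model, x):
    def log_post(prior, means, vars_):
        lp = math.log(prior)
        for xi, m, v in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        return lp
    return max(model, key=lambda lbl: log_post(*model[lbl]))

data = [((1.0, 1.2), "a"), ((0.8, 1.0), "a"),
        ((3.0, 3.1), "b"), ((3.2, 2.9), "b")]
model = fit_gnb(data)
pred = predict_gnb(model, (0.9, 1.1))
```

The independence assumption is exactly why, as noted above, naive Bayes degrades when variables are strongly correlated: correlated features get "double counted" in the posterior.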

6. Clustering Algorithms

  • k-Means
  • k-Medians
  • Expectation Maximisation (EM)
  • Hierarchical Clustering
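A minimal sketch of k-Means (Lloyd's algorithm) in one dimension, with made-up points: assign each point to its nearest center, move each center to the mean of its assigned points, repeat.

```python
def kmeans_1d(points, centers, iters=10):
    """One-dimensional Lloyd's algorithm; k-Medians would use the
    median instead of the mean in the update step."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

points = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
centers = kmeans_1d(points, centers=[0.0, 5.0])
```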

7. Association Rule Learning Algorithms

  • Apriori algorithm
  • Eclat algorithm
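The core of Apriori is a level-wise search: count support for candidate itemsets of size k, keep the frequent ones, and combine them into size-(k+1) candidates. A toy sketch with invented transactions:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """transactions: list of frozensets. Returns all itemsets whose
    support (fraction of transactions containing them) meets the threshold."""
    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    items = sorted({i for t in transactions for i in t})
    frequent, size = [], 1
    current = [frozenset([i]) for i in items]
    while current:
        kept = [c for c in current if support(c) >= min_support]
        frequent.extend(kept)
        # Candidate generation: union pairs of frequent k-sets into (k+1)-sets.
        size += 1
        current = list({a | b for a, b in combinations(kept, 2)
                        if len(a | b) == size})
    return frequent

transactions = [frozenset(t) for t in
                [{"milk", "bread"}, {"milk", "bread", "eggs"},
                 {"bread"}, {"milk", "eggs"}]]
freq = frequent_itemsets(transactions, min_support=0.5)
```

Association rules are then read off the frequent itemsets by splitting each into an antecedent and consequent and checking confidence; Eclat reaches the same itemsets via a vertical (transaction-id list) representation.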

8. Artificial Neural Network Algorithms

  • Perceptron
  • Back-Propagation
  • Hopfield Network
  • Radial Basis Function Network (RBFN)
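The perceptron, the first entry above, is simple enough to sketch in full: nudge the weights toward each misclassified example until the (linearly separable) data are fit. The toy data below are invented:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Rosenblatt perceptron. data: list of (features, label),
    labels in {-1, +1}. Returns weights and bias."""
    d = len(data[0][0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            # Update only when the example is on the wrong side.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: class +1 when x0 + x1 is large.
data = [((0.0, 0.0), -1), ((0.0, 1.0), -1),
        ((2.0, 2.0), 1), ((3.0, 2.0), 1)]
w, b = train_perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
         for x, _ in data]
```

Stacking perceptron-like units into layers and training the whole network by gradient descent is exactly what back-propagation does, which leads into the deep learning section below.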

9. Deep Learning Algorithms

  • Deep Boltzmann Machine (DBM)
  • Deep Belief Networks (DBN)
  • Convolutional Neural Network (CNN)
  • Stacked Auto-Encoders

10. Dimensionality Reduction Algorithms

  • Principal Component Analysis (PCA)
  • Principal Component Regression (PCR)
  • Partial Least Squares Regression (PLSR)
  • Sammon Mapping
  • Multidimensional Scaling (MDS)
  • Projection Pursuit
  • Linear Discriminant Analysis (LDA)
  • Mixture Discriminant Analysis (MDA)
  • Quadratic Discriminant Analysis (QDA)
  • Flexible Discriminant Analysis (FDA)
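As a sketch of PCA, the workhorse of this category: center the data, form the covariance matrix, and extract its leading eigenvector, here via power iteration (plain Python, illustrative data lying roughly along the line y = x):

```python
import math

def first_pc(data, iters=100):
    """First principal component via power iteration on the
    covariance matrix: repeatedly apply C to a vector and renormalize."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8]]
pc = first_pc(data)  # roughly (1/sqrt(2), 1/sqrt(2)), up to sign
```

Subsequent components come from deflating the covariance matrix and repeating; projecting the centered data onto the top components gives the reduced representation.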

11. Ensemble Algorithms

  • Boosting
  • Bootstrapped Aggregation (Bagging)
  • AdaBoost
  • Stacked Generalization (blending)
  • Gradient Boosting Machines (GBM)
  • Gradient Boosted Regression Trees (GBRT)
  • Random Forest
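Bagging is easy to sketch with a deliberately trivial base learner. The idea: fit each learner on a bootstrap resample (drawn with replacement) and average the fits, which reduces variance. Real bagging and Random Forests do the same with decision trees; the sample below is invented:

```python
import random

def bagged_mean(sample, n_models=200, seed=0):
    """Bootstrap aggregation where the 'model' is just the sample
    mean: resample with replacement, fit, then average the fits."""
    rng = random.Random(seed)
    n = len(sample)
    fits = []
    for _ in range(n_models):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        fits.append(sum(resample) / n)
    return sum(fits) / n_models

sample = [2.0, 4.0, 6.0, 8.0]
estimate = bagged_mean(sample)  # close to the plain mean, 5.0
```

Boosting differs in that its learners are fit sequentially, each one reweighting or targeting the errors of the ensemble so far, rather than independently on resamples.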

12. Other Algorithms

  • Computational intelligence (evolutionary algorithms, etc.)
  • Computer Vision (CV)
  • Natural Language Processing (NLP)
  • Recommender Systems
  • Reinforcement Learning
  • Graphical Models



Comment by Ben Dutta on December 29, 2015 at 2:58pm

I was hoping to see '(Multiple) Correspondence Analysis' under the list of "Dimensionality Reduction" methods.

Comment by Mahesh N Sanil on December 19, 2015 at 8:27am

It's good and well explained, with the algorithms nicely clustered. Love reading and understanding the clusters.

Comment by Anil kumar on October 29, 2015 at 9:19pm

Really helpful....

Comment by Sione Palu on October 28, 2015 at 8:12pm

Non-linear "Dimensionality Reduction" is missing from #10.

Comment by Dean Abbott on October 22, 2015 at 10:18am

I love these kinds of lists! Nicely done. Biggest surprise? The nice list of dimensionality reduction methods (that includes the Sammon Projection). 

But, I have to make a couple quibbles:

1) Back-propagation is an algorithm to fit a multi-layered perceptron (MLP) neural network. Other algorithms to find the weights include QuickProp, RProp, Conjugate Gradient, and Levenberg-Marquardt. "Back-prop" should be changed to MLP.

2) Kohonen SOM

I wouldn't put this in instance based algorithms at all; it's really a clustering algorithm very much like K-Means clustering. It's related to instance based algorithms in the same way K-Means is related to k-NN -- they typically use the same distance metric (Euclidean distance). But Kohonen is an unsupervised method (like K-Means).

© 2017 Data Science Central