Boosting is a supervised learning technique based on research by Robert Schapire and Yoav Freund. It generates and combines multiple classifiers to improve predictive accuracy. As a machine learning meta-algorithm, it reduces bias in supervised learning and can be viewed as the minimization of a convex loss function over a convex set of functions.
At issue is whether a set of weak learners can be combined into a single strong learner. A weak learner is defined to be a classifier that is only slightly correlated with the true classification, while a strong learner is a classifier that is arbitrarily well correlated with the true classification. Learning algorithms that turn a set of weak learners into a single strong learner are known as "boosting" algorithms.
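As a toy illustration of the weak-to-strong idea (the labels and predictions below are invented for the example), three classifiers that are each only 80% accurate can be combined by majority vote into a perfectly accurate one, provided their mistakes fall on different examples:

```python
# True labels for five examples (hypothetical data for illustration)
y = [1, -1, 1, 1, -1]

# Three weak classifiers' predictions; each is wrong on exactly one
# (different) example, so each is only 80% accurate on its own.
h1 = [-1, -1, 1, 1, -1]   # wrong on example 0
h2 = [1, 1, 1, 1, -1]     # wrong on example 1
h3 = [1, -1, -1, 1, -1]   # wrong on example 2

def majority_vote(preds):
    """Combine per-example predictions by unweighted majority vote."""
    return [1 if sum(col) > 0 else -1 for col in zip(*preds)]

combined = majority_vote([h1, h2, h3])
# combined == y: every example is classified correctly by at least
# two of the three voters, so the vote recovers every label.
```

Real boosting algorithms go further than this fixed vote: they choose the weak learners sequentially and weight their votes, which is what the adaptive schemes below do.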
The first boosting algorithms created by Schapire and Freund (a recursive majority-gate formulation and boost-by-majority) were not adaptive and could not take full advantage of the weak learners. They found success with AdaBoost, a boosting meta-algorithm that can be used with other learning algorithms to improve their performance. AdaBoost is adaptive in that each subsequent classifier is tweaked in favor of the instances misclassified by previous classifiers. However, it is sensitive to noisy data and outliers.
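AdaBoost's adaptive reweighting can be sketched from scratch. The following is a minimal illustration, not a production implementation: it uses single-feature decision stumps as the weak learners, and the toy 1-D dataset, function names, and round count are all invented for the example.

```python
import math

def train_stump(X, y, w):
    """Exhaustively pick the decision stump (feature, threshold, sign)
    with the lowest weighted error under the current example weights w."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for s in (1, -1):
                err = sum(wi for wi, x, yi in zip(w, X, y)
                          if (s if x[f] >= t else -s) != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, s)
    return best

def adaboost(X, y, rounds):
    n = len(X)
    w = [1.0 / n] * n                    # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        err, f, t, s = train_stump(X, y, w)
        err = max(err, 1e-10)            # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, s))
        # Upweight misclassified examples, downweight correct ones, then
        # renormalize -- this is the "adaptive" step that focuses later
        # classifiers on the instances earlier ones got wrong.
        for i, x in enumerate(X):
            pred = s if x[f] >= t else -s
            w[i] *= math.exp(-alpha * y[i] * pred)
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Classify by a weighted vote of the stumps."""
    score = sum(a * (s if x[f] >= t else -s) for a, f, t, s in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D problem: the positives lie in a middle interval, so no single
# threshold stump can separate the classes, but three boosted stumps can.
X = [[0], [1], [3], [4], [6], [7]]
y = [-1, -1, 1, 1, -1, -1]
model = adaboost(X, y, rounds=3)
```

On this data, `[predict(model, x) for x in X]` reproduces `y` exactly, even though every individual stump misclassifies at least two examples.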
More recent boosting algorithms include LogitBoost, BrownBoost, LPBoost, and gradient boosting.
On the other hand, boosting does not always work. When training cases are noisy, boosting can actually reduce classification accuracy. Researchers Phillip Long and Rocco Servedio argue that many boosting algorithms are flawed because convex potential boosters cannot withstand random classification noise, rendering results on noisy data sets questionable: "...if any non-zero fraction of the training data is mis-labeled, the boosting algorithm tries extremely hard to correctly classify these training examples, and fails to produce a model with accuracy better than 1/2. This result does not apply to branching program based boosters but does apply to AdaBoost, LogitBoost, and others."
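The mechanism behind this failure mode can be made concrete with a stylized calculation (the error rate ε below is an assumed, illustrative value, not taken from the source): in AdaBoost, an example that a round's weak learner misclassifies has its weight multiplied by exp(α) before renormalization. A mislabeled example that every weak learner keeps getting "wrong" therefore sees its raw weight grow geometrically, which is why the algorithm "tries extremely hard" on such points:

```python
import math

eps = 0.2                                  # assumed weighted error of each round's weak learner
alpha = 0.5 * math.log((1 - eps) / eps)    # AdaBoost's per-round classifier weight
per_round_growth = math.exp(alpha)         # multiplier for a persistently misclassified example
after_10_rounds = per_round_growth ** 10

# per_round_growth == sqrt((1 - eps) / eps) == 2.0 for eps = 0.2, so the
# mislabeled example's raw weight grows roughly a thousandfold in 10 rounds.
```

After renormalization, this growth means the noisy point comes to dominate the weight distribution, pulling later weak learners toward fitting the noise.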