
Forecasting with the Baum-Welch Algorithm and Hidden Markov Models

Leonard Baum and Lloyd Welch designed a probabilistic modelling algorithm to detect patterns in hidden Markov processes. They built upon the theory of probabilistic functions of a Markov chain and the Expectation–Maximization (EM) Algorithm, an iterative method for finding maximum likelihood or maximum a posteriori estimates of the parameters of a statistical model that depends on unobserved latent variables.

The Baum–Welch Algorithm initially proved to be a remarkable code-breaking and speech recognition tool, but it also has applications in business, finance, the sciences and elsewhere. The algorithm finds the unknown parameters of a Hidden Markov Model: it computes the maximum likelihood estimate of those parameters given a set of observed feature vectors.

The algorithm is an iterative two-step process:

1. compute a posteriori (posterior) probabilities under the current model (the expectation step); and
2. re-estimate the model parameters from those probabilities (the maximization step).

These two steps are repeated until the parameter estimates converge.
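The two steps above can be sketched in code. Below is a minimal, illustrative pure-Python pass of Baum–Welch for a two-state, two-symbol HMM; the parameters and observation sequence are invented for the example, and a production implementation would work in log space (or rescale) to avoid numerical underflow on long sequences.

```python
def forward(obs, pi, A, B):
    """Forward pass: alpha[t][i] = P(o_1..o_t, state_t = i)."""
    n = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for t in range(1, len(obs)):
        alpha.append([sum(alpha[t - 1][j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                      for i in range(n)])
    return alpha

def backward(obs, A, B, n):
    """Backward pass: beta[t][i] = P(o_{t+1}..o_T | state_t = i)."""
    T = len(obs)
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(n):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(n))
    return beta

def baum_welch_step(obs, pi, A, B):
    """One E-step (posteriors) + M-step (re-estimate pi, A, B)."""
    n, T = len(pi), len(obs)
    alpha, beta = forward(obs, pi, A, B), backward(obs, A, B, n)
    likelihood = sum(alpha[T - 1][i] for i in range(n))
    # gamma[t][i]: posterior probability of being in state i at time t
    gamma = [[alpha[t][i] * beta[t][i] / likelihood for i in range(n)]
             for t in range(T)]
    # xi[t][i][j]: posterior probability of the transition i -> j at time t
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / likelihood
            for j in range(n)] for i in range(n)] for t in range(T - 1)]
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(n)] for i in range(n)]
    n_symbols = len(B[0])
    new_B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              sum(gamma[t][i] for t in range(T))
              for k in range(n_symbols)] for i in range(n)]
    return new_pi, new_A, new_B, likelihood

# Toy run: two hidden states, two observable symbols (0 and 1).
obs = [0, 0, 1, 0, 1, 1, 0, 1]
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
for _ in range(5):
    # `likelihood` is P(obs) under the parameters *before* this update;
    # EM guarantees it is non-decreasing across iterations.
    pi, A, B, likelihood = baum_welch_step(obs, pi, A, B)
print(round(likelihood, 6))
```

Each re-estimated quantity is a ratio of expected counts, so the rows of `A` and `B` (and the vector `pi`) remain valid probability distributions after every iteration.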

A Markov process models a sequence of events in which the probability of each event depends only on the state reached in the previous event, not on the earlier history. A Hidden Markov Model is a probabilistic model of the joint probability of a collection of random variables. Hidden Markov Models provide a simple and effective framework for modelling time-varying spectral vector sequences.

A hidden Markov process models a system governed by an underlying Markov process whose states and parameters are not directly observed: only outputs that depend on the hidden states are visible. Inferring those hidden states provides useful information about the random sequence of events.
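To make the "hidden" part concrete, here is a toy generative sketch with invented weather/umbrella parameters: the model samples a state path that an observer never sees, and emits the symbols the observer does see.

```python
import random

random.seed(0)

states = ["rainy", "sunny"]              # hidden states
symbols = ["umbrella", "no_umbrella"]    # observable symbols
pi = {"rainy": 0.5, "sunny": 0.5}        # initial state distribution
A = {"rainy": {"rainy": 0.7, "sunny": 0.3},   # state transition probabilities
     "sunny": {"rainy": 0.4, "sunny": 0.6}}
B = {"rainy": {"umbrella": 0.9, "no_umbrella": 0.1},  # emission probabilities
     "sunny": {"umbrella": 0.2, "no_umbrella": 0.8}}

def draw(dist):
    """Sample one key from a {outcome: probability} dict."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

hidden, observed = [], []
state = draw(pi)
for _ in range(5):
    hidden.append(state)                 # never visible to the modeller
    observed.append(draw(B[state]))      # the only data we actually get
    state = draw(A[state])
print(observed)
```

Baum–Welch works in the opposite direction: given only `observed`, it estimates `pi`, `A` and `B` without ever seeing `hidden`.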

The Baum–Welch Algorithm and Hidden Markov Models are used successfully for financial trading systems, predicting market trends, workforce planning, fraud detection, supply chain optimization, forecasting supply and demand, financial time series prediction and anomaly detection in network traffic activity.

With enough data and compute power, the Baum–Welch Algorithm and Hidden Markov Models can provide probabilities about a process and predict future events.
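As a hedged sketch of how such a forecast works: once a model has been fitted, the forward probabilities give a filtered distribution over the current hidden state, which can be pushed one step through the transition and emission matrices to get a predictive distribution over the next observation. The parameters below are invented stand-ins for a fitted model (state 0 ~ "down regime", state 1 ~ "up regime"; symbol 0 = "price fell", symbol 1 = "price rose").

```python
def forward(obs, pi, A, B):
    """alpha[i] = P(o_1..o_T, state_T = i) under the model."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return alpha

def predict_next_symbol(obs, pi, A, B):
    """P(o_{T+1} = k | o_1..o_T): filter, step forward, then emit."""
    n = len(pi)
    alpha = forward(obs, pi, A, B)
    total = sum(alpha)
    state_now = [a / total for a in alpha]              # filtered state posterior
    state_next = [sum(state_now[j] * A[j][i] for j in range(n))
                  for i in range(n)]                    # one-step state prediction
    return [sum(state_next[i] * B[i][k] for i in range(n))
            for k in range(len(B[0]))]                  # predictive distribution

# Illustrative (assumed) parameters, e.g. from a prior Baum-Welch fit:
pi = [0.5, 0.5]
A = [[0.8, 0.2], [0.3, 0.7]]
B = [[0.7, 0.3], [0.1, 0.9]]
probs = predict_next_symbol([1, 1, 0, 1, 1], pi, A, B)
print(probs)  # P("fell") and P("rose") for the next step; sums to 1
```

Because the recent observations are mostly rises, the predictive distribution here leans toward another rise; the forecast is a probability over outcomes, not a point prediction.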

See: http://bit.ly/1kXsipc

Comment by william e winkler on March 6, 2014 at 2:38pm

Here is a well-received tutorial on the EM algorithm and hidden Markov models by Jeff Bilmes.

http://lasa.epfl.ch/teaching/lectures/ML_Phd/Notes/GP-GMM.pdf 

Variants of hidden Markov models have been used for very nontrivial types of data restructuring and clean-up, as covered in tutorials by Andrew McCallum and William Cohen at various ACM and IEEE conferences.

Borkar, V., Deshmukh, K., and Sarawagi, S. (2001), “Automatic Segmentation of Text into Structured Records,” Association of Computing Machinery SIGMOD 2001, 175-186.

Cohen, W. W., and Sarawagi, S. (2004), “Exploiting Dictionaries in Named Entity Extraction: Combining Semi-Markov Extraction Processes and Data Integration Methods,” Proceedings of the ACM Knowledge Discovery and Data Mining Conference 2004, 89-98.

Agichtein, E., and Ganti, V. (2004), “Mining Reference Tables for Automatic Text Segmentation,” ACM Knowledge Discovery and Data Mining Conference 2004, 20-29.
