Event Details

Machine Learning

Time: May 13, 2016 at 9am to May 14, 2016 at 5pm
Location: Temple University Center City
Street: 1515 Market Street
City/Town: Philadelphia, PA 19102
Website or Map: http://statisticalhorizons.co…
Phone: 610-642-1941
Event Type: training, seminar
Organized By: Statistical Horizons LLC
Latest Activity: Feb 17, 2016

Event Description

Taught by Stephen Vardeman, Ph.D.

Modern researchers increasingly find themselves facing a new paradigm where data are no longer scarce and expensive, but rather abundant and cheap. Both numbers of cases/instances and numbers of variables/features are exploding. This new reality raises important issues in effective data analysis.

Of course, the basic statistical objective (discovery and quantitative description of simple structure) remains unchanged. But new possibilities for applying highly flexible methods (not practical in “small data” contexts) must be reconciled with the inherent sparsity of essentially any data set comprising a large number of features, and with the corresponding danger of overfitting and unwarranted generalization from the data in hand. Modern statistical machine learning methods rationally and effectively address these new realities.

This course first describes and explains the new context, formulates the issues that it raises, and points to cross-validation as a fundamental tool for matching method flexibility/complexity to data set information content in predictive problems. Then a variety of modern squared error loss prediction methods (modern regression methods) will be discussed, related to optimal prediction, and illustrated using standard R packages (a brief sketch follows the list below). These will include:

  • smoothing methods
  • shrinkage for linear prediction (ridge, lasso, and elastic net predictors)
  • regression trees
  • random forests, and
  • boosting
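
For a flavor of the hands-on R sessions, here is a minimal sketch, assuming the glmnet package and simulated data chosen purely for illustration (it is not taken from the course materials): it fits a lasso shrinkage predictor and uses cross-validation to choose the penalty.

  # Illustrative sketch only: shrinkage prediction with cross-validation
  library(glmnet)                                  # assumes install.packages("glmnet")

  set.seed(1)
  x <- matrix(rnorm(100 * 20), nrow = 100)         # 100 cases, 20 features (simulated)
  y <- x[, 1] - 2 * x[, 2] + rnorm(100)            # simple linear signal plus noise

  # alpha = 1 gives the lasso, alpha = 0 ridge, values in between the elastic net
  cv_fit <- cv.glmnet(x, y, alpha = 1)             # 10-fold cross-validation over a lambda grid
  plot(cv_fit)                                     # CV prediction error as a function of lambda
  coef(cv_fit, s = "lambda.min")                   # coefficients at the CV-chosen penalty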

Next a variety of modern classification methods will be introduced, related to optimal classification, and illustrated using standard R packages (again with a short sketch after the list). These will include:

  • linear methods for classification (linear discriminant analysis, logistic regression, support vector classifiers)
  • kernel extensions of support vector classifiers
  • classification trees
  • adaboost, and
  • other ensemble classifiers
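
Again purely as illustration and not course material, here is a short R sketch of two of these classifiers on the built-in iris data, assuming the e1071 and rpart packages.

  # Illustrative sketch only: a support vector classifier and a classification tree
  library(e1071)                                   # svm()
  library(rpart)                                   # rpart()

  set.seed(1)
  train <- sample(nrow(iris), 100)                 # simple train/test split

  # Linear support vector classifier; kernel = "radial" would give a kernel extension
  svm_fit <- svm(Species ~ ., data = iris[train, ], kernel = "linear")
  mean(predict(svm_fit, iris[-train, ]) == iris$Species[-train])      # test-set accuracy

  # Classification tree
  tree_fit <- rpart(Species ~ ., data = iris[train, ], method = "class")
  mean(predict(tree_fit, iris[-train, ], type = "class") == iris$Species[-train])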

Finally, we’ll discuss some methods of modern “unsupervised” statistical machine learning, where the object is not prediction of a particular response variable but rather discovery of relations among features or natural groupings of either cases or features. These will include principal components and clustering methods.
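
As a last illustrative sketch (again not course material), assuming nothing beyond base R and the built-in iris measurements, principal components and k-means clustering can be run as follows:

  # Illustrative sketch only: principal components and k-means clustering
  x <- scale(iris[, 1:4])                          # standardize the four numeric features

  pca <- prcomp(x)                                 # principal components
  summary(pca)                                     # variance explained by each component

  set.seed(1)
  km <- kmeans(x, centers = 3, nstart = 20)        # k-means with three clusters
  table(km$cluster, iris$Species)                  # compare clusters to the known species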

The course will consist of both lectures and hands-on R sessions.
