
How to Use the Bag of Little Bootstraps Methodology to Compute Error Bounds on Machine Learning Tasks

Calculating error bounds on metrics derived from very large data sets has long been problematic, largely for computational reasons: the classical bootstrap requires recomputing the statistic over many full-size resamples of the data. In more traditional statistics one can put a confidence interval or error bound on most metrics (e.g., the mean), parameters (e.g., the slope in a regression), or classifications (e.g., a confusion matrix and the Kappa statistic).
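For comparison, here is what a classical percentile bootstrap interval on a mean looks like (a minimal sketch, assuming numpy; the synthetic data and the 2,000 resamples are illustrative, not from the guide). Every one of the B resamples is the full size of the data, which is exactly what becomes expensive as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=10.0, size=500)  # e.g., per-customer acquisition costs

# Classical percentile bootstrap: resample the FULL data set B times
# and recompute the statistic on each resample.
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```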

For many machine learning applications, an error bound can be very important. Casson Stallings makes this point well with the example of a company developing a method for acquiring customers.

Which statement gives a CEO more useful information on how to proceed: the estimate without the error bound, or the one with it?

If this topic interests you, you can read the full step-by-step guide on how to use the Bag of Little Bootstraps methodology to compute error bounds on machine learning tasks and access the whole project on Domino.
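The core idea is worth sketching. Instead of resampling the full data set, the Bag of Little Bootstraps (Kleiner et al.) draws s small subsets of size b = n^γ (γ typically between 0.5 and 1), resamples each subset back up to size n using multinomial weights, computes a percentile interval within each subset, and averages the intervals. A minimal sketch, assuming numpy; the values of s, r, and γ below are illustrative defaults, not taken from the guide:

```python
import numpy as np

def blb_confidence_interval(data, statistic, s=20, r=100, gamma=0.7,
                            alpha=0.05, seed=None):
    """Bag of Little Bootstraps percentile interval (sketch).

    Draws s subsets of size b = n**gamma, resamples each back up to
    the full size n via multinomial weights, and averages the
    per-subset (1 - alpha) percentile intervals.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** gamma)
    lowers, uppers = [], []
    for _ in range(s):
        subset = rng.choice(data, size=b, replace=False)
        estimates = []
        for _ in range(r):
            # A size-n resample of b points is just a vector of counts,
            # so we never materialize n values -- the key BLB trick.
            weights = rng.multinomial(n, np.full(b, 1.0 / b))
            estimates.append(statistic(subset, weights))
        lo, hi = np.percentile(estimates,
                               [100 * alpha / 2, 100 * (1 - alpha / 2)])
        lowers.append(lo)
        uppers.append(hi)
    # Average the subset intervals to get the final error bound.
    return float(np.mean(lowers)), float(np.mean(uppers))

def weighted_mean(x, w):
    return np.average(x, weights=w)

data = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=100_000)
print(blb_confidence_interval(data, weighted_mean, seed=1))
```

Because each resample is represented by a weight vector over only b points, the statistic never touches more than n^γ values at a time, and the s subsets can be processed in parallel, which is what makes the method tractable on very large data sets.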
