Resampling is a way to reuse data to generate new, hypothetical samples (called *resamples*) that are representative of an underlying population. It's useful when:

- The underlying distribution of the population is unknown,
- Traditional formulas are difficult or impossible to apply,
- You want a substitute for (or a check on) traditional methods.

Two popular tools are the bootstrap and the jackknife. Although they have many similarities (e.g. both can estimate the precision of an estimator θ), they have a few notable differences.

**Bootstrapping** is the most popular resampling method today. It uses sampling with replacement to estimate the sampling distribution of a desired estimator. Its main purpose is to **evaluate the variance of an estimator.** It has many other applications, including:

- Estimating confidence intervals and standard errors for the estimator (e.g. the standard error for the mean),
- Estimating precision for an estimator θ,
- Dealing with non-normally distributed data,
- Calculating sample sizes for experiments.

Bootstrapping has been shown to be an excellent method for estimating the sampling distributions of many statistics, sometimes giving better results than the traditional normal approximation, and it works well with small samples. However, it doesn't perform well when the underlying statistic isn't smooth, and it is a poor choice for dependent data, missing data, censored data, or data with outliers.
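To make the resampling-with-replacement idea concrete, here is a minimal sketch in Python (the helper name `bootstrap_se` and its parameters are illustrative, not from the article): it estimates the standard error of the sample mean from B bootstrap resamples and compares it with the textbook formula s/√n.

```python
import numpy as np

def bootstrap_se(data, stat, B=1000, rng=None):
    """Bootstrap estimate of the standard error of `stat`."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(data)
    # Draw B resamples of size n *with replacement*, recomputing the
    # statistic on each resample; the spread of these replicates
    # approximates the sampling distribution of the statistic.
    replicates = [stat(rng.choice(data, size=n, replace=True)) for _ in range(B)]
    return np.std(replicates, ddof=1)

rng = np.random.default_rng(42)
data = rng.normal(loc=10, scale=2, size=50)

se_boot = bootstrap_se(data, np.mean, B=1000, rng=rng)
se_formula = np.std(data, ddof=1) / np.sqrt(len(data))  # classic s / sqrt(n)
print(se_boot, se_formula)  # the two estimates should be close
```

For the mean there is of course an exact formula, which is what makes this a useful sanity check; the same `bootstrap_se` call works unchanged for statistics with no simple standard-error formula (e.g. the trimmed mean).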

The **jackknife** works by sequentially deleting one observation from the data set and recomputing the desired statistic each time. It is computationally simpler than bootstrapping and more orderly (the procedural steps are identical for every repetition), which means that, unlike bootstrapping, it can in principle be performed by hand. It is still fairly computationally intensive, though, so while by-hand calculation was common in the past, computers are normally used today. One area where it doesn't perform well is non-smooth statistics (like the median) and nonlinear ones (e.g. the correlation coefficient).

The main application for the Jackknife is to **reduce bias and evaluate variance for an estimator.** It can also be used to:

- Find the standard error of a statistic,
- Estimate precision for an estimator θ.

To sum up the differences, Brian Caffo offers this great analogy: *"As its name suggests, the jackknife is a small, handy tool; in contrast to the bootstrap, which is then the moral equivalent of a giant workshop full of tools."*

Some specific differences:

- The bootstrap **requires a computer and is about ten times more computationally intensive** than the jackknife. The jackknife can (at least theoretically) be performed by hand.
- The bootstrap is **conceptually simpler** than the jackknife. The jackknife requires *n* repetitions for a sample of size *n* (for example, if you have 10,000 items then you'll have 10,000 repetitions), while the bootstrap requires "B" repetitions. This leads to a choice of B, which isn't always an easy task. A general rule of thumb is that B = 1,000 unless you have access to a large amount of computing power.
- In most cases (see Efron, 1982), the jackknife **doesn't perform as well as** the bootstrap.
- Bootstrapping introduces a "cushion error," an **extra source of variation** due to the finite resampling of size B. The cushion error is reduced for large B, or where only biased sets of bootstrap samples are used (called the *b*-bootstrap).
- The jackknife is **more conservative** than bootstrapping, producing slightly larger estimated standard errors.
- The jackknife gives the **same results** every time it's run, because the leave-one-out replications are completely determined by the data. The bootstrap gives **different results** each time it's run.
- The jackknife tends to **perform better** for confidence interval estimation for **pairwise agreement measures.**
- Bootstrapping performs better for **skewed distributions**.
- The jackknife is more suitable for **small original data samples.**
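The determinism point is easy to demonstrate: two jackknife runs on the same data are identical, while two bootstrap runs (with independent random resamples) generally differ slightly. A small sketch, with illustrative helper names:

```python
import numpy as np

def jackknife_se(data, stat):
    """Deterministic leave-one-out jackknife standard error."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

def bootstrap_se(data, stat, B=1000):
    """Bootstrap standard error; depends on the random resamples drawn."""
    rng = np.random.default_rng()  # fresh, unseeded generator each call
    n = len(data)
    reps = [stat(rng.choice(data, size=n, replace=True)) for _ in range(B)]
    return np.std(reps, ddof=1)

data = np.random.default_rng(1).normal(size=40)

# The jackknife has no random component: identical answers on every run
print(jackknife_se(data, np.mean) == jackknife_se(data, np.mean))  # True
# The bootstrap answers differ between runs (by a small Monte Carlo error)
print(bootstrap_se(data, np.mean), bootstrap_se(data, np.mean))
```

Seeding the bootstrap's generator makes it reproducible for a given seed, but the estimate still carries the Monte Carlo "cushion error" described above, which shrinks as B grows.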

Efron, B. (1982), "The Jackknife, the Bootstrap, and Other Resampling Plans," SIAM, monograph #38, CBMS-NSF.

