# 11 Commonly Confused Statistical Terms

Hundreds of years of statistical analysis by tens of thousands of independent researchers have led to an array of similar-sounding names, many of which are often confused. Here are 11 of the most common. Which ones are you guilty of confusing?

1. Chi Square Test for Independence and Chi Square Goodness of Fit Test

To avoid confusion, don't refer simply to a "chi-square test," because there are two of them. Both use the chi-square statistic and distribution, but they do completely different things.

• A chi-square test for independence tests whether two categorical variables are related.
• A chi-square goodness of fit test checks how well a single categorical variable fits a probability distribution.
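
As a quick sketch of the difference (assuming SciPy is installed; the counts below are made up for illustration), the two tests are run on different kinds of input: a two-way table for independence, and a single set of observed vs. expected frequencies for goodness of fit:

```python
from scipy.stats import chi2_contingency, chisquare

# Independence: are two categorical variables (rows x columns) related?
observed = [[30, 10], [20, 40]]  # hypothetical 2x2 contingency table
chi2, p_ind, dof, expected = chi2_contingency(observed)

# Goodness of fit: does one categorical variable match expected frequencies?
chi2_gof, p_gof = chisquare(f_obs=[18, 22, 20], f_exp=[20, 20, 20])
```

With these hypothetical counts, the independence test yields a small p-value (the variables look related), while the goodness-of-fit test yields a large one (the observed frequencies are consistent with the expected ones).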

2. Covariate vs. Covariance

A covariate is a variable that is observed during analysis rather than manipulated; it's not the focus of the analysis, but it may affect the outcome (which is why procedures like ANCOVA control for covariates). "Covariate" can be confused with covariance, which has an entirely different meaning: it's a measure of how much two random variables vary together.
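A minimal sketch of covariance (assuming NumPy; the data points are invented): a positive value means the two variables tend to move in the same direction.

```python
import numpy as np

x = np.array([2.1, 2.5, 3.6, 4.0])
y = np.array([8.0, 10.0, 12.0, 14.0])

# Sample covariance: average product of deviations from each mean.
cov_xy = np.cov(x, y, ddof=1)[0, 1]
```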

3. Sample Space and Event Space

These two types of spaces, both of which describe probability outcomes, are often confused. They are similar in a way, but if you want to be mathematically precise, make sure you know the difference between the two:

• A sample space of an experiment contains all possible outcomes.
• The event space contains all sets of outcomes (all subsets of the sample space).
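
The relationship can be sketched in a few lines (using two coin flips as the experiment): the event space is the power set of the sample space, so it has 2^n members for a sample space of size n.

```python
from itertools import combinations

# Sample space of two coin flips: every possible outcome.
sample_space = {"HH", "HT", "TH", "TT"}

# Event space: every subset of the sample space (the power set).
def power_set(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

event_space = power_set(sample_space)
# The empty set and the full sample space are both events.
```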

4. False Alarm Ratio vs. False Alarm Rate

A false alarm ratio (FAR) is the number of false alarms divided by the total number of alarms (where a false alarm is an alarm given when the event doesn't actually occur). Unfortunately, the false alarm rate (FAR) shares the same acronym, but it has a slightly different meaning: it's the number of false alarms divided by the total number of times the event didn't happen.
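A sketch of the two quantities, using invented forecast counts (hits, false alarms, and correct negatives) and the common verification definitions:

```python
# Hypothetical counts: tp = hits, fp = false alarms, tn = correct negatives.
tp, fp, tn = 80, 20, 100

# Ratio: false alarms out of all alarms issued.
false_alarm_ratio = fp / (fp + tp)  # 20 / 100 = 0.2

# Rate: false alarms out of all times the event did not occur.
false_alarm_rate = fp / (fp + tn)   # 20 / 120 ≈ 0.167
```

Same acronym, same numerator, different denominator — which is exactly why they get mixed up.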

5. Inverse Normal Distribution vs. Inverse Gaussian Distribution

Another name for a Gaussian distribution is a normal distribution. However, the inverse Gaussian distribution is not another name for the inverse normal distribution. The inverse Gaussian (which isn't really an "inverse") is a family of continuous probability distributions in its own right. The inverse normal distribution, on the other hand, is a true inverse: it allows you to work backwards from a probability to find x-values.
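SciPy makes the distinction concrete (a sketch assuming SciPy is installed): the "inverse normal" operation is the normal distribution's quantile function (`ppf`), while the inverse Gaussian is its own distribution, `invgauss`.

```python
from scipy.stats import norm, invgauss

# "Inverse normal": work backwards from a probability to an x-value.
x = norm.ppf(0.975, loc=0, scale=1)  # ≈ 1.96

# Inverse Gaussian: a separate distribution, not an inverted normal.
ig_mean = invgauss(mu=1.5).mean()
```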

6. Pearson's Coefficient vs. Pearson's Coefficient of Skewness

Pearson's Coefficient tells you whether two data sets or variables are independent of or dependent on each other. Pearson's Coefficient of Skewness is a way to find skew in a sample. To avoid confusion between the two, many people call Pearson's Coefficient the contingency coefficient instead.
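A sketch of both quantities (assuming NumPy and SciPy; the tables and data are invented). The contingency coefficient is derived from a chi-square statistic as C = sqrt(chi2 / (chi2 + n)); Pearson's second coefficient of skewness is 3 × (mean − median) / standard deviation:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Pearson's (contingency) coefficient: C = sqrt(chi2 / (chi2 + n)).
observed = np.array([[30, 10], [20, 40]])  # hypothetical 2x2 table
chi2, _, _, _ = chi2_contingency(observed, correction=False)
n = observed.sum()
contingency_coefficient = (chi2 / (chi2 + n)) ** 0.5

# Pearson's second coefficient of skewness: 3 * (mean - median) / std.
data = np.array([1.0, 2.0, 2.0, 3.0, 10.0])  # right-skewed sample
skewness = 3 * (data.mean() - np.median(data)) / data.std(ddof=1)
```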

7. Positive Predictive Value vs. Sensitivity

The confusion here doesn't stem from the names, but rather from what the two terms represent. The positive predictive value (PPV) tells you the odds of having a disease if you get a positive result. Should you panic over that positive Covid-19 test? PPV can be a useful piece of information. On the other hand, the sensitivity of a test is the proportion of people with the disease who will have a positive result. That little nugget isn't so useful to you, the consumer, but it's useful information for the CDC.
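The two quantities come from the same confusion matrix but divide by different totals (a sketch with invented counts):

```python
# Hypothetical test results: true/false positives, false negatives, true negatives.
tp, fp, fn, tn = 90, 30, 10, 870

# PPV: of everyone who tested positive, what fraction actually has the disease?
ppv = tp / (tp + fp)          # 90 / 120 = 0.75

# Sensitivity: of everyone with the disease, what fraction tests positive?
sensitivity = tp / (tp + fn)  # 90 / 100 = 0.9
```

Note that PPV depends on how common the disease is in the tested population, while sensitivity is a property of the test itself.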

8. Prediction Interval vs. Confidence Interval

Although they are often confused because their difference is subtle, prediction and confidence intervals are not the same thing. Confidence intervals give you a range of plausible values for a population parameter (like the mean or variance). A prediction interval is where you would expect to find a single future value. To muddy the waters further, we also have tolerance intervals, which cover a particular proportion of the population at a given confidence level. As an example, let's say your factory produces light bulbs:

• Confidence Interval: I'm 99% confident the light bulbs have a mean life of between 200 and 220 hours.
• Prediction Interval: I'm 95% confident that the next light bulb produced will last between 200 and 220 hours.
• Tolerance Interval: I'm 95% confident that at least 75% of the light bulbs will last between 200 and 220 hours.
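
A sketch of the first two intervals for simulated bulb lifetimes (assuming NumPy and SciPy, normally distributed data, and the standard t-based formulas). The prediction interval is always wider, because a single future bulb varies more than the sample mean does:

```python
import numpy as np
from scipy import stats

# Hypothetical lifetimes (hours) of a sample of bulbs, roughly normal.
rng = np.random.default_rng(0)
lifetimes = rng.normal(loc=210, scale=5, size=30)

n = len(lifetimes)
mean, sd = lifetimes.mean(), lifetimes.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)

# Confidence interval: plausible range for the population MEAN lifetime.
ci = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))

# Prediction interval: plausible range for ONE future bulb's lifetime (wider).
pi = (mean - t * sd * np.sqrt(1 + 1 / n), mean + t * sd * np.sqrt(1 + 1 / n))
```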

9. Probable vs. Plausible

"Plausible" (which basically means reasonable) and "probable" are often interchanged. For example, consider these statements:

“If the probability is large then the null is plausible and we cannot reject the null hypothesis”  

“On the other hand, if we report a range of plausible values – a confidence interval – we have a good shot at capturing the parameter.”

Due to the ambiguous language, it's not clear here whether plausible means "probable" or "reasonable." To muddle the situation even further, "likelihood" (roughly, how well a parameter value explains the observed data) is also confused with plausible.
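Likelihood has a precise meaning that a tiny example can pin down (a sketch assuming SciPy; the coin-flip data is invented): it scores candidate parameter values against data that has already been observed.

```python
from scipy.stats import binom

# Observed data: 7 heads in 10 flips.
k, n = 7, 10

# Likelihood of each candidate value of p, given the observed data.
likelihood_fair = binom.pmf(k, n, 0.5)    # L(p = 0.5 | data)
likelihood_biased = binom.pmf(k, n, 0.7)  # L(p = 0.7 | data)
# p = 0.7 explains the observed 7-of-10 heads better than p = 0.5.
```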

10. Relative Absolute Error vs. Relative Error

Relative Absolute Error measures the performance of a predictive model. It has a completely different meaning from relative error, which is a general measure of precision or accuracy for instruments like clocks, rulers, or scales.
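A sketch of both (assuming NumPy; the numbers are invented, and relative absolute error is computed in its common form, total absolute model error relative to a naive predictor that always guesses the mean of the actuals):

```python
import numpy as np

# Relative absolute error of a predictive model.
actual = np.array([10.0, 12.0, 14.0, 16.0])
predicted = np.array([11.0, 12.0, 13.0, 17.0])
rae = np.abs(actual - predicted).sum() / np.abs(actual - actual.mean()).sum()

# Relative error of a single measurement: |measured - true| / |true|.
true_length, measured_length = 100.0, 98.0
relative_error = abs(measured_length - true_length) / abs(true_length)  # 0.02
```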

11. Reverse Causality vs. Simultaneity

The definitions for these two terms are very close:

• Simultaneity: X causes changes in Y, and Y causes changes in X.
• Reverse Causality: Y causes changes in X.

References

Rao, S. (n.d.). Understanding Hypothesis Tests.

Libretexts. (n.d.). Confidence Intervals.

Reverse Causality image by author.

Covariates controlled for in ANCOVA. Image: Makingstats|Wikimedia Commons
