In the previous parts of our tutorial we discussed:

- Basic notation used in assessing classification models
- Quantitative quality indicators
- Confusion Matrix

In this fourth part of the tutorial we will discuss the ROC curve.

**What is the ROC curve?**

The ROC curve is one of the methods for visualizing classification quality: it plots the TPR (True Positive Rate) against the FPR (False Positive Rate) as the decision threshold varies.

The more convex the curve, the better the classifier. In the example below, the "green" classifier is better in area 1, and the "red" classifier is better in area 2.

**How is the ROC curve created?**

- We compute the values of the decision function.
- We test the classifier for different alpha thresholds. Recall that alpha is the threshold on the estimated probability: observations scoring above it are assigned to one category (the positive class), and those below to the other category (the negative class).
- For each classification with one value of the alpha threshold we obtain a (TPR, FPR) pair, which corresponds to one point on the ROC curve.
- For each classification with one value of the alpha threshold we also have the corresponding Confusion Matrix.
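The steps above can be sketched in Python. This is a minimal illustration with made-up labels and decision-function scores; the function name and data are hypothetical, not from the original tutorial.

```python
import numpy as np

def roc_points(y_true, scores, thresholds):
    """For each alpha threshold, classify and return the (FPR, TPR) pair,
    i.e. one point on the ROC curve."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    P = int((y_true == 1).sum())  # number of actual positives
    N = int((y_true == 0).sum())  # number of actual negatives
    points = []
    for alpha in thresholds:
        y_pred = (scores >= alpha).astype(int)      # threshold the scores
        tp = int(((y_pred == 1) & (y_true == 1)).sum())
        fp = int(((y_pred == 1) & (y_true == 0)).sum())
        points.append((fp / N, tp / P))             # (FPR, TPR)
    return points

# toy example: two negatives, two positives
y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_points(y_true, scores, [0.0, 0.3, 0.5, 1.1]))
# → [(1.0, 1.0), (0.5, 1.0), (0.0, 0.5), (0.0, 0.0)]
```

Sweeping alpha from 0 to above the maximum score traces the curve from the point (1, 1) down to (0, 0); each intermediate threshold also fixes one Confusion Matrix.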


The quality of classification can be summarized from the ROC curve by calculating the:

- area under ROC Curve (AUC) coefficient

The higher the value of the AUC coefficient, the better. AUC = 1 means a perfect classifier, AUC = 0.5 is obtained for purely random classifiers, and AUC < 0.5 means the classifier performs worse than a random one.

- Gini Coefficient: GC = 2 × AUC − 1 (the classifier's advantage over a purely random one)

The higher the value of GC, the better. GC = 1 denotes a perfect classifier, GC = 0 denotes a purely random one.
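Both coefficients can be computed directly from labels and scores. Below is a minimal sketch, assuming NumPy is available; it uses the Mann-Whitney interpretation of AUC (the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counted half), which is equivalent to the area under the ROC curve. The function name and toy data are illustrative, not from the tutorial.

```python
import numpy as np

def auc_and_gini(y_true, scores):
    """Compute AUC via the Mann-Whitney U statistic, then GC = 2*AUC - 1."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]   # scores of actual positives
    neg = scores[y_true == 0]   # scores of actual negatives
    # count positive/negative pairs where the positive outranks the negative
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    auc = wins / (len(pos) * len(neg))
    gini = 2 * auc - 1
    return float(auc), float(gini)

auc, gini = auc_and_gini([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(auc, gini)  # → 0.75 0.5
```

A purely random scorer wins each pairwise comparison with probability 0.5, giving AUC = 0.5 and GC = 0, which matches the interpretations above.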

