In the previous parts of our tutorial we discussed:
In this fourth part of the tutorial we will discuss the ROC curve.
What is the ROC curve?
The ROC (Receiver Operating Characteristic) curve is one of the methods for visualizing classification quality; it shows the relationship between the TPR (True Positive Rate) and the FPR (False Positive Rate) as the classifier's decision threshold varies.
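As a quick reminder of the two axes, both rates come straight from confusion-matrix counts; a minimal sketch with illustrative numbers:

```python
# TPR and FPR from confusion-matrix counts (the counts are made up).
tp, fn = 80, 20   # actual positives: correctly / incorrectly classified
fp, tn = 10, 90   # actual negatives: incorrectly / correctly classified

tpr = tp / (tp + fn)  # True Positive Rate (sensitivity, recall)
fpr = fp / (fp + tn)  # False Positive Rate (1 - specificity)

print(tpr, fpr)  # 0.8 0.1
```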
The more convex the curve, the better the classifier. In the example below, the "green" classifier is better in area 1 and the "red" classifier is better in area 2.
How is the ROC curve created?
Example:
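The construction can be sketched in code: sweep a decision threshold from the highest predicted score downwards and record one (FPR, TPR) point per threshold. The scores and labels below are made up purely for illustration:

```python
# Sketch: build ROC points by sweeping the decision threshold over
# predicted scores (toy data).
scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,    0,   1,   0,   0  ]  # 1 = positive class

P = sum(labels)       # total actual positives
N = len(labels) - P   # total actual negatives

roc = [(0.0, 0.0)]    # start with a threshold above every score
for thr in sorted(set(scores), reverse=True):
    # labels of all cases classified as positive at this threshold
    predicted_pos = [lab for s, lab in zip(scores, labels) if s >= thr]
    tpr = sum(predicted_pos) / P
    fpr = (len(predicted_pos) - sum(predicted_pos)) / N
    roc.append((fpr, tpr))

print(roc)  # the last point is always (1.0, 1.0)
```

Plotting these (FPR, TPR) pairs, joined by line segments, gives the ROC curve.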
The quality of classification can be determined from the ROC curve by calculating the AUC (Area Under Curve) coefficient or the GC (Gini Coefficient):
The higher the value of the AUC coefficient, the better. AUC = 1 means a perfect classifier, AUC = 0.5 is obtained for a purely random classifier, and AUC < 0.5 means the classifier performs worse than a random one.
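One common way to compute the AUC is the trapezoidal rule over the ROC points; a minimal sketch, assuming the (FPR, TPR) points are already sorted by increasing FPR (the values below are illustrative):

```python
# Sketch: AUC via the trapezoidal rule over (fpr, tpr) points,
# sorted by increasing FPR (toy values).
roc = [(0.0, 0.0), (0.0, 0.5), (0.25, 0.75), (0.5, 1.0), (1.0, 1.0)]

# Sum the area of the trapezoid under each consecutive pair of points.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(roc, roc[1:]))

print(auc)  # 0.875
```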
The higher the value of the GC, the better. GC = 1 denotes a perfect classifier, and GC = 0 denotes a purely random one.
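The two coefficients are directly related: GC = 2·AUC − 1, so a perfect classifier (AUC = 1) gives GC = 1 and a purely random one (AUC = 0.5) gives GC = 0. A minimal sketch with an illustrative AUC value:

```python
# Sketch: Gini coefficient derived from AUC via GC = 2*AUC - 1.
auc = 0.875          # illustrative AUC value
gc = 2 * auc - 1

print(gc)            # 0.75
print(2 * 1.0 - 1)   # perfect classifier -> 1.0
print(2 * 0.5 - 1)   # random classifier  -> 0.0
```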
Posted 1 March 2021