This article was written by Jason Brownlee. Jason is the editor-in-chief at MachineLearningMastery.com. He has a Masters and a PhD in Artificial Intelligence, has published books on machine learning, and has written operational code that runs in production.
After you make predictions, you need to know if they are any good.
There are standard measures that we can use to summarize how good a set of predictions actually is.
Knowing how good a set of predictions is allows you to estimate how good a given machine learning model of your problem is.
In this tutorial, you will discover how to implement four standard prediction evaluation metrics from scratch in Python.
After reading this tutorial, you will know:
Let’s get started.
You must estimate the quality of a set of predictions when training a machine learning model.
Performance metrics like classification accuracy and root mean squared error can give you a clear objective idea of how good a set of predictions is, and in turn how good the model is that generated them.
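To give a taste of what the tutorial builds, here is a minimal from-scratch sketch of classification accuracy, the fraction of predictions that match the expected values, expressed as a percentage. The function name and argument order are illustrative, not taken from the tutorial itself:

```python
def accuracy_metric(actual, predicted):
    """Return classification accuracy as a percentage."""
    # Count predictions that exactly match the actual labels.
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / len(actual) * 100.0

# Example: 4 of 5 predictions match, so accuracy is 80.0%.
actual = [0, 0, 1, 1, 1]
predicted = [0, 1, 1, 1, 1]
print(accuracy_metric(actual, predicted))  # 80.0
```

Accuracy like this is a sensible summary for balanced classification problems; it becomes misleading when one class dominates the data.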
This is important as it allows you to tell the difference and select among:
As such, performance metrics are a required building block in implementing machine learning algorithms from scratch.
This tutorial is divided into four parts:
These steps will provide the foundations you need to handle evaluating predictions made by machine learning algorithms.
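For regression predictions, the root mean squared error mentioned above can be sketched in the same from-scratch style. Again, the function name is illustrative rather than the tutorial's own:

```python
from math import sqrt

def rmse_metric(actual, predicted):
    """Return the root mean squared error of a set of predictions."""
    # Average the squared errors, then take the square root to
    # return the error in the same units as the predictions.
    sum_error = sum((p - a) ** 2 for a, p in zip(actual, predicted))
    return sqrt(sum_error / len(actual))

actual = [0.1, 0.2, 0.3, 0.4, 0.5]
predicted = [0.11, 0.19, 0.29, 0.41, 0.5]
print(rmse_metric(actual, predicted))
```

Because the errors are squared before averaging, RMSE penalizes large mistakes more heavily than mean absolute error does.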