Metrics for Evaluation of Machine Learning Algorithms

Original article was published on Artificial Intelligence on Medium



After processing the data and training the model, the next step is to check how effective the model is. Different performance metrics are used to evaluate different Machine Learning algorithms:

Accuracy

Accuracy is a good measure when the target variable classes in the data are nearly balanced, and it is a relevant measure for a binary classifier. For a binary classifier that classifies instances into positive (1) and negative (0) instances, any single prediction falls into one of the four categories below.

a. True Positives (TP): True positives are the cases where the actual class of the data point was 1 (True) and the predicted class is also 1 (True).

b. True Negatives (TN): True negatives are the cases where the actual class of the data point was 0 (False) and the predicted class is also 0 (False).

c. False Positives (FP): False positives are the cases where the actual class of the data point was 0 (False) and the predicted class is 1 (True). "False" because the model predicted incorrectly, and "positive" because the predicted class was the positive one.

d. False Negatives (FN): False negatives are the cases where the actual class of the data point was 1 (True) and the predicted class is 0 (False). "False" because the model predicted incorrectly, and "negative" because the predicted class was the negative one.

Accuracy = (TP + TN) / (TP + TN + FP + FN). In the numerator are our correct predictions (True Positives and True Negatives), and in the denominator are all predictions made by the algorithm (right as well as wrong ones).
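The four counts and the accuracy ratio above can be sketched in plain Python. This is a minimal illustration, not from the article; the function and variable names are my own.

```python
def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(y_true, y_pred):
    # Correct predictions (TP + TN) over all predictions made.
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / (tp + tn + fp + fn)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # actual classes
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions
print(accuracy(y_true, y_pred))  # 6 correct out of 8 -> 0.75
```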

Precision

Precision tells us what proportion of positive predictions, i.e. predictions of 1 (True), were actually positive: Precision = TP / (TP + FP).
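A minimal sketch of this ratio in Python (names are mine, for illustration only):

```python
def precision(y_true, y_pred):
    # Of all predicted positives, how many were actually positive?
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp)

y_true = [1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0]
print(precision(y_true, y_pred))  # 2 true positives out of 4 positive predictions -> 0.5
```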

Recall

Recall tells us what proportion of actual positive data points, i.e. 1 (True), were predicted as positive, i.e. predicted as 1 (True): Recall = TP / (TP + FN).
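The same style of sketch works for recall; note that the denominator now counts the actual positives (TP + FN) rather than the predicted positives. Names are illustrative, not from the article.

```python
def recall(y_true, y_pred):
    # Of all actual positives, how many did the model find?
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

y_true = [1, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0]
print(recall(y_true, y_pred))  # 2 of 4 actual positives recovered -> 0.5
```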