Confusion Matrix

In many fields such as medicine, software engineering, and marketing, once a machine learning model has been built for a particular purpose, it is important to evaluate it. One way to evaluate a classification model is with a confusion matrix.

The entries of the confusion matrix can be tricky, so I'll attempt to explain them in the simplest way possible.

Let’s say we are looking to test for a disease called ‘Acute Syndrome’ (AS).

The four entries of the confusion matrix are described below within the context of the disease AS:

True Positive (TP): A patient tested positive for AS and actually has AS.

True Negative (TN): A patient tested negative for AS and does not have AS. As a patient, this is the best place to be :).

False Positive (FP) (also known as a Type 1 error): A patient tested positive for AS, but the result is wrong because the patient does not have AS.

False Negative (FN) (also known as a Type 2 error): A patient tested negative for AS, but the result is wrong because the patient actually has AS.

Let's put this into context with some data. Assume we have a random sample of ten patients.

1 indicates that a patient has AS, and 0 indicates that a patient does not have AS. Actual and predicted cases are presented below.

actual = [1,1,0,1,0,0,1,0,0,0]

predicted = [1,0,0,1,0,0,1,1,1,0]

The confusion matrix is derived from the actual and predicted cases by comparing each pair of values.

Counting the pairs gives: TP = 3, TN = 4, FP = 2 and FN = 1.
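
As a quick sanity check, here is a minimal Python sketch that reproduces these counts from the two lists. The scikit-learn call is an optional cross-check and assumes the library is installed; the manual counting needs only plain Python.

from sklearn.metrics import confusion_matrix

actual = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
predicted = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

# Count each outcome by comparing actual/predicted pairs.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
print(tp, tn, fp, fn)  # 3 4 2 1

# Same counts via scikit-learn (rows = actual, columns = predicted).
print(confusion_matrix(actual, predicted))  # [[4 2]
                                            #  [1 3]]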

From the confusion matrix, we can determine measures such as accuracy, precision, recall, and the F1-score, as well as the rates used to plot the ROC curve and compute its AUC.

Accuracy: Measures the proportion of predictions that the model got right. The formula is given below.

Accuracy = Number of correct predictions / Total number of predictions

Accuracy = (TP + TN) / (TP + TN + FP + FN)

From our data: Accuracy = (3 + 4)/10 = 0.7
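
A minimal check of this arithmetic in Python, using the counts above:

tp, tn, fp, fn = 3, 4, 2, 1
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.7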

True Positive Rate or Recall or Sensitivity: Measures the proportion of actual positives that are identified as such.

TPR = TP/P = TP/(TP + FN)

From our data: TPR = 3/4 = 0.75
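
And the corresponding check in Python:

tp, fn = 3, 1
tpr = tp / (tp + fn)
print(tpr)  # 0.75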

False Positive Rate: Measures the proportion of actual negatives wrongly predicted as positives.

FPR = FP/N = FP/(FP + TN)

From our data: FPR = 2/6 ≈ 0.33
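
Again, a one-line check of the arithmetic:

fp, tn = 2, 4
fpr = fp / (fp + tn)
print(round(fpr, 2))  # 0.33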

Precision: Measures the proportion of positive predictions that are actually correct.

Precision = TP/(TP + FP)

From our data: Precision = 3/5 = 0.6
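
Checked in Python with the counts above:

tp, fp = 3, 2
precision = tp / (tp + fp)
print(precision)  # 0.6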

F1 Score: This is the harmonic mean of precision and recall (sensitivity).

F1 = 2 * precision * recall / (precision + recall)

From our data: F1 score = 2 * 0.6 * 0.75 / (0.6 + 0.75) ≈ 0.67
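
The same result computed from the counts, with an optional cross-check against scikit-learn's f1_score (again assuming the library is installed):

from sklearn.metrics import f1_score

actual = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
predicted = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

precision, recall = 0.6, 0.75
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.67

print(round(f1_score(actual, predicted), 2))  # 0.67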

I hope that I have been able to provide you with a basic idea of the confusion matrix and some measures that can be derived from it. If you enjoyed reading any part of this blog, a clap would be appreciated for some motivation.