Confused with Confusing Confusion Matrix…!!!

Tulsipatro
4 min read · Sep 15, 2020

The first time I looked at a Confusion matrix, I was actually confused. In spite of learning it in class almost every time, I could never get the knack of it. The terms “True Positive”, “False Positive”, “Type I error” and “Type II error” were always confusing. Finally the day has arrived when I am actually learning it and also writing a blog on it.

A confusion matrix is a simple way to find out how many predictions were made correctly by our model. It is used to evaluate the results of a predictive model with a class outcome, showing how many instances of each class were correctly predicted as their true class.

To understand what’s actually happening inside a confusion matrix, we need to understand True Positives, True Negatives, False Positives and False Negatives.

Confusing again…!!!

Let’s consider a data set where our target variable has two classes;
class A : Apples and class B : Bananas.

A confusion matrix is just keeping track of
class A correctly predicted as class A,
class A incorrectly predicted as class B,
class B correctly predicted as class B,
class B incorrectly predicted as class A.

Take for example that “Apple” becomes our target class, i.e., we intend to know whether our target class “Apple” was predicted as “Apple” or not.

Our target class A is our positive and the other class B is our negative.

True Positive : class A (apple) correctly predicted as class A (apple)
True Negative : class B (banana) correctly predicted as class B (banana)
False Negative: class A (apple) incorrectly predicted as class B (banana)
False Positive : class B (banana) incorrectly predicted as class A (apple)

Our Goal : to get more Trues than Falses
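To make this concrete, here is a minimal Python sketch, assuming a small made-up set of actual and predicted labels, that counts the four outcomes by hand:

```python
# Hypothetical actual and predicted labels; "Apple" is the positive class.
actual    = ["Apple", "Apple", "Banana", "Banana", "Apple", "Banana"]
predicted = ["Apple", "Banana", "Banana", "Apple", "Apple", "Banana"]

pairs = list(zip(actual, predicted))
tp = sum(a == "Apple"  and p == "Apple"  for a, p in pairs)   # apples predicted as apples
tn = sum(a == "Banana" and p == "Banana" for a, p in pairs)   # bananas predicted as bananas
fn = sum(a == "Apple"  and p == "Banana" for a, p in pairs)   # apples missed as bananas
fp = sum(a == "Banana" and p == "Apple"  for a, p in pairs)   # bananas mistaken for apples
print(tp, tn, fn, fp)  # 2 2 1 1
```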

How do we organize this for better understanding…???

We draw a matrix grid or a Confusion Matrix.

Here the x-axis shows the predictions and the y-axis shows the actual values.
The diagonal counts show how many subjects were correctly predicted as their class. These are the trues for class A and class B.
The confusion matrix helps us calculate several metrics such as accuracy, sensitivity, specificity, precision, recall and F1 score.
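As a quick illustration (reusing the same made-up labels as above), scikit-learn's confusion_matrix lays out exactly this grid:

```python
from sklearn.metrics import confusion_matrix

actual    = ["Apple", "Apple", "Banana", "Banana", "Apple", "Banana"]
predicted = ["Apple", "Banana", "Banana", "Apple", "Apple", "Banana"]

# Rows are the actual classes, columns are the predictions;
# labels=["Apple", "Banana"] puts the positive class (Apple) first.
cm = confusion_matrix(actual, predicted, labels=["Apple", "Banana"])
print(cm)
# [[2 1]   <- actual Apple:  2 TP, 1 FN
#  [1 2]]  <- actual Banana: 1 FP, 2 TN
```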

Accuracy

It is the ratio of correct predictions to total predictions made. It is often presented as a percentage by multiplying the result by 100.
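In terms of the four counts, Accuracy = (TP + TN) / (TP + TN + FP + FN). For the made-up labels above, that is (2 + 2) / 6 ≈ 0.67, or about 67%.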

Precision :

Precision is defined as the number of true positives divided by the number of true positives plus the number of false positives. False positives are cases the model incorrectly labels as positive that are actually negative, or in our example, false positives are the cases where bananas have been incorrectly predicted as apples.

Precision can otherwise be defined as the number of true positives divided by the total number of predicted positive results.
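In formula form, Precision = TP / (TP + FP). For the made-up labels above, that is 2 / (2 + 1) ≈ 0.67: of the three fruits predicted as apples, two really were apples.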

Recall :

Recall is defined as the number of true positives divided by the total number of actual positive results (true positives plus false negatives).
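In formula form, Recall = TP / (TP + FN). For the made-up labels above, that is 2 / (2 + 1) ≈ 0.67: of the three actual apples, the model caught two.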

F1-score :

The F1-score can be described as the harmonic mean of precision and recall.
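In formula form, F1 = 2 × (Precision × Recall) / (Precision + Recall). As a quick sketch on the same made-up labels, scikit-learn can compute all four metrics directly:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

actual    = ["Apple", "Apple", "Banana", "Banana", "Apple", "Banana"]
predicted = ["Apple", "Banana", "Banana", "Apple", "Apple", "Banana"]

# pos_label tells scikit-learn which class counts as "positive".
print(accuracy_score(actual, predicted))                      # 0.666...
print(precision_score(actual, predicted, pos_label="Apple"))  # 0.666...
print(recall_score(actual, predicted, pos_label="Apple"))     # 0.666...
print(f1_score(actual, predicted, pos_label="Apple"))         # 0.666...
```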

Type I and Type II errors

Going by the standard definitions,

A Type I error is the rejection of a true null hypothesis (also known as a “false positive” finding or conclusion; example: “an innocent person is convicted”), while a Type II error is the non-rejection of a false null hypothesis (also known as a “false negative” finding or conclusion; example: “a guilty person is not convicted”).

Type I error :
Reject a true Null hypothesis
Serious Error
False positive

Type II error :
Fail to reject a false Null hypothesis
False negative

For our case, Type I error occurs when our model predicts,
“The fruit as an Apple when actually it is a Banana”,
Type II error occurs when our model predicts,
“The fruit as a Banana when actually it is an Apple”.

To simplify we can say,

When class B (negative) is predicted as class A (positive), Type I error occurs;
When class A (positive) is predicted as class B (negative), Type II error occurs.
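Tying this back to the matrix: Type I errors are the false positives and Type II errors are the false negatives, so the two error types are exactly the two off-diagonal counts of the confusion matrix.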

I know I am yet to learn a lot about this particular concept.
Will write a blog soon after learning it!

Hope some confusion was cleared.
