Compute Classification Report and Confusion Matrix in Python


Introduction

In machine learning, classification is one of the most common types of problems, where a model is built to predict the category of a target variable. The classification report and confusion matrix are used to evaluate the performance of such a model and to see where it makes mistakes.

In this article, we will discuss the classification report and confusion matrix: what they are, how to compute them with code examples in Python, and how to interpret them. This will give you a clear idea of these evaluation methods and help you apply them when working on classification models.

Before we jump into the code and its interpretation, let us discuss the classification report and confusion matrix and the basic intuition behind them.

What is a Confusion Matrix?

A confusion matrix in classification is a table or matrix whose cells count how the predicted categories compare with the actual categories of the target variable.

It mainly includes the True Positive, True Negative, False Positive, and False Negative.

Let us discuss these categories one by one.

True Positive: A true positive is the case where the actual value is positive and the model correctly predicts positive. The model makes no mistake here.

True Negative: A true negative is the case where the actual value is negative and the model correctly predicts negative.

False Positive: This is a case where the model makes a mistake: the actual value is negative, but the model predicts positive. These errors are also known as type 1 errors.

False Negative: It is the case where the actual value is positive, and the model incorrectly predicts that the value is negative. These errors are also known as type 2 errors.

These four counts summarize the model's behaviour by comparing what the model predicted with what was actually observed. From them, several metrics can be derived that describe the performance of the model in different ways, as shown in the sketch below.
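As a minimal sketch of how these four counts can be obtained in practice, scikit-learn's confusion_matrix can be unpacked directly for a binary problem (the y_true and y_pred arrays here are made-up toy labels, not from the dataset used later in this article):

from sklearn.metrics import confusion_matrix

# Toy ground-truth and predicted labels (illustrative values only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, ravel() returns the counts in the order TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP:", tp, "TN:", tn, "FP:", fp, "FN:", fn)   # TP: 3 TN: 3 FP: 1 FN: 1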

Mainly, Accuracy, Precision, Recall, and the F-beta score are calculated from these values. Let us discuss them one by one.

Accuracy

Accuracy measures the proportion of correct predictions out of all the predictions the model has made. It is calculated as the ratio of the correct predictions (true positives plus true negatives) to the total number of predictions.

Accuracy = (TP + TN) / (TP + TN + FP + FN)
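As a quick illustrative check, accuracy can be computed by hand from the four counts and compared with scikit-learn's accuracy_score, reusing the toy labels and counts from the sketch above:

from sklearn.metrics import accuracy_score

# Toy labels from the earlier sketch (illustrative values only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tp, tn, fp, fn = 3, 3, 1, 1

# Accuracy by the formula: (TP + TN) / (TP + TN + FP + FN)
print((tp + tn) / (tp + tn + fp + fn))    # 0.75
print(accuracy_score(y_true, y_pred))     # 0.75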

Precision

Precision is the measure of correctly predicted positive values out of all values that the model predicted as positive.

Precision = TP / (TP + FP)

Recall

Recall is the measure of correctly predicted positive values out of actual positive values. It is also known as the true positive rate.

Recall = TP / (TP + FN)
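A minimal sketch comparing these two formulas with scikit-learn's precision_score and recall_score, again on the same toy labels as above:

from sklearn.metrics import precision_score, recall_score

# Toy labels from the earlier sketch (illustrative values only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tp, fp, fn = 3, 1, 1

print(tp / (tp + fp), precision_score(y_true, y_pred))   # 0.75 0.75
print(tp / (tp + fn), recall_score(y_true, y_pred))      # 0.75 0.75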

F beta Score

The F-beta score combines precision and recall into a single number. The beta parameter controls the balance between the two: beta greater than 1 gives more weight to recall, while beta less than 1 gives more weight to precision.

F Beta Score = (1 + Beta^2) * (Precision * Recall) / (Beta^2 * Precision + Recall)
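In scikit-learn, this is available as fbeta_score. A small sketch on the same toy labels shows how beta shifts the weight between precision and recall:

from sklearn.metrics import fbeta_score, f1_score

# Toy labels from the earlier sketch (illustrative values only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# beta > 1 weights recall more heavily; beta < 1 weights precision more heavily
print(fbeta_score(y_true, y_pred, beta=2))
print(fbeta_score(y_true, y_pred, beta=0.5))

# With beta = 1, the F-beta score reduces to the F1 score
print(fbeta_score(y_true, y_pred, beta=1), f1_score(y_true, y_pred))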

What is Classification Report?

The classification report, as the name suggests, is a report of evaluation metrics computed for each category of the target variable.

The classification report generally includes four values for each class: Precision, Recall, F1 Score, and Support.

Precision and recall are the metrics discussed above: precision is the ratio of correctly predicted positive values to all values predicted as positive, and recall is the ratio of correctly predicted positive values to all actual positive values.

The F1 score is the F-beta score with beta set to 1, so equal weight is given to precision and recall.

Support is the number of instances of each category in the target variable. In simple language, it is the number of observations belonging to a particular category of the target variable.
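As a tiny illustration, support is just the count of each class, which can be obtained, for example, with numpy's bincount (toy labels only):

import numpy as np

# Toy target labels (illustrative values only)
y = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1])

# Support for each class is how many times that class occurs
print(np.bincount(y))   # [4 5] -> support 4 for class 0, 5 for class 1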

Example for Computing Classification Report and Confusion Matrices

Now let us calculate the confusion matrix and the classification report in Python by taking a dataset and training a model on it.

Here we will generate a dummy dataset with 200 observations whose target variable has the classes 0 and 1, which makes it a binary classification problem.

Example

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Generating a random dataset with 200 rows
np.random.seed(0)
X = np.random.rand(200, 5)
y = np.random.randint(2, size=200)

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Train the model
model = LogisticRegression()
model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = model.predict(X_test)

# Calculating the accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Calculating the classification report
c_report = classification_report(y_test, y_pred)
print("Classification Report:\n", c_report)

# Calculating the confusion matrix
con_m = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:\n", con_m)

Output

Accuracy: 0.6
Classification Report:
              precision    recall  f1-score   support

           0       0.62      0.73      0.67        33
           1       0.57      0.44      0.50        27

    accuracy                           0.60        60
   macro avg       0.59      0.59      0.58        60
weighted avg       0.60      0.60      0.59        60

Confusion Matrix:
 [[24  9]
 [15 12]]

The above code prints three results: the accuracy of the model, the classification report, and the confusion matrix.

The model can be evaluated with the help of the accuracy, precision, recall, and F1 scores, together with the macro and weighted averages included in the classification report. These averages combine the per-class precision, recall, and F1 scores: the macro average weights every class equally, while the weighted average weights each class by its support. A small sketch of computing them directly is shown below.
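Continuing the example above (with y_test and y_pred already defined), these averages can also be computed with the average argument of the individual metric functions; this is just a sketch reproducing the numbers shown in the classification report:

from sklearn.metrics import precision_score, recall_score, f1_score

# Macro average: simple mean of per-class scores (every class counts equally)
print(precision_score(y_test, y_pred, average='macro'))

# Weighted average: per-class scores weighted by each class's support
print(precision_score(y_test, y_pred, average='weighted'))

# The same 'average' argument works for recall_score and f1_score
print(recall_score(y_test, y_pred, average='macro'),
      f1_score(y_test, y_pred, average='weighted'))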

Conclusion

In this article, we discussed the classification report and confusion matrix: what they are, how they are used, and why they are important, covering the related terminology and a code example that calculates them. This should help you understand these metrics and use them wherever necessary.
