Calculate ROC Curve, Classification Report And Confusion Matrix For Multilabel Classification Problem

I am trying to understand how to make a confusion matrix and ROC curve for my multilabel classification problem. I am building a neural network. Here are my classes: mlb = MultiLab

Solution 1:

From v0.21 onwards, scikit-learn includes a multilabel confusion matrix; adapting the example from the docs for 5 classes:

import numpy as np
from sklearn.metrics import multilabel_confusion_matrix
y_true = np.array([[1, 0, 1, 0, 0],
                   [0, 1, 0, 1, 1],
                   [1, 1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 1],
                   [0, 1, 1, 1, 0],
                   [1, 1, 1, 0, 0]])

multilabel_confusion_matrix(y_true, y_pred)
# result:
array([[[1, 0],
        [0, 2]],

       [[1, 0],
        [0, 2]],

       [[0, 1],
        [1, 1]],

       [[2, 0],
        [0, 1]],

       [[0, 1],
        [2, 0]]])

The usual classification_report also works fine:

from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred))
# result
              precision    recall  f1-score   support

           0       1.00      1.00      1.00         2
           1       1.00      1.00      1.00         2
           2       0.50      0.50      0.50         2
           3       1.00      1.00      1.00         1
           4       0.00      0.00      0.00         2

   micro avg       0.75      0.67      0.71         9
   macro avg       0.70      0.70      0.70         9
weighted avg       0.67      0.67      0.67         9
 samples avg       0.72      0.64      0.67         9
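For reference, the micro-averaged figures in the report can be verified by hand, pooling the true/false positives and false negatives over all labels:

```python
import numpy as np

y_true = np.array([[1, 0, 1, 0, 0],
                   [0, 1, 0, 1, 1],
                   [1, 1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 1],
                   [0, 1, 1, 1, 0],
                   [1, 1, 1, 0, 0]])

tp = ((y_true == 1) & (y_pred == 1)).sum()  # 6 pooled true positives
fp = ((y_true == 0) & (y_pred == 1)).sum()  # 2 pooled false positives
fn = ((y_true == 1) & (y_pred == 0)).sum()  # 3 pooled false negatives

print(tp / (tp + fp))  # micro precision: 6/8 = 0.75
print(tp / (tp + fn))  # micro recall: 6/9 ≈ 0.67
```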

Regarding ROC, you can take some ideas from the Plot ROC curves for the multilabel problem example in the scikit-learn docs (although I'm not sure how useful the ROC concept itself is in the multilabel setting).
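As a rough sketch of that per-label approach (the y_score array here is made up for illustration; in practice it would be your network's predicted probabilities), you can compute one binary ROC curve per label by treating each column independently:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([[1, 0, 1, 0, 0],
                   [0, 1, 0, 1, 1],
                   [1, 1, 1, 0, 1]])

# Hypothetical predicted probabilities, one column per label
y_score = np.array([[0.9, 0.05, 0.12, 0.23, 0.78],
                    [0.11, 0.81, 0.51, 0.63, 0.34],
                    [0.68, 0.89, 0.76, 0.43, 0.27]])

# One ROC curve per label; fpr/tpr could then be passed to plt.plot
for i in range(y_true.shape[1]):
    fpr, tpr, _ = roc_curve(y_true[:, i], y_score[:, i])
    print(f"class {i}: AUC = {auc(fpr, tpr):.2f}")
```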

Confusion matrix and classification report require hard class predictions (as in the example); ROC requires the predictions as probabilities.

To convert your probabilistic predictions to hard classes, you need a threshold. Usually (and implicitly) this threshold is taken to be 0.5, i.e. predict 1 if y_pred > 0.5, else predict 0; but this is not always the case, and the right value depends on the particular problem. Once you have chosen a threshold, you can easily convert your probabilistic predictions to hard classes with a list comprehension; here is a simple example:

import numpy as np

y_prob = np.array([[0.9, 0.05, 0.12, 0.23, 0.78],
                   [0.11, 0.81, 0.51, 0.63, 0.34],
                   [0.68, 0.89, 0.76, 0.43, 0.27]])

thresh = 0.5

y_pred = np.array([[1 if i > thresh else 0 for i in j] for j in y_prob])

y_pred
# result:
array([[1, 0, 0, 0, 1],
       [0, 1, 1, 1, 0],
       [1, 1, 1, 0, 0]])
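Since y_prob is already a NumPy array, the same thresholding can also be done without the list comprehension, using a vectorized comparison:

```python
import numpy as np

y_prob = np.array([[0.9, 0.05, 0.12, 0.23, 0.78],
                   [0.11, 0.81, 0.51, 0.63, 0.34],
                   [0.68, 0.89, 0.76, 0.43, 0.27]])

# Broadcast the scalar threshold over the whole array at once
y_pred = (y_prob > 0.5).astype(int)

y_pred
# result:
# array([[1, 0, 0, 0, 1],
#        [0, 1, 1, 1, 0],
#        [1, 1, 1, 0, 0]])
```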