
I am running some evaluation metrics using the YOLOv5 object detection algorithm, and wish to calculate my true positives and false positives. For instance, the evaluation metric outputs are as follows:

   Class    Images    Labels     Prec    Recall    mAP@.5    mAP@.5:.95
     all       100        36    0.444     0.702     0.481         0.223
 Class 1        50        29    0.588     0.689     0.668         0.333
 Class 2        50         7    0.301     0.714     0.293         0.113

Looking at this source, I found that you could calculate the true positives and false positives with the following equations:

#Computed for Class 1

TP = Recall * Labels = 34.45 ≈ 34
FP = (TP / Precision) - TP = 23.82 ≈ 24

I am new to evaluation metrics, and at first glance the false positive number seems fairly high. Is this the correct formula to compute the true positives and false positives? I'm just looking for some verification, and an explanation of why it works, if it does.


1 Answer


Recall is the fraction of the relevant documents that are successfully retrieved:

$$\text{Recall} = \frac{TP}{TP + FN}$$

Labels for a class is the total number of examples that actually belong to that class, i.e. Labels = TP + FN.

Hence TP = Recall * Labels = Recall * (TP + FN).

Precision is the fraction of retrieved documents that are relevant to the query:

$$\text{Precision} = \frac{TP}{TP + FP}$$

Rearranging the precision formula then gives FP = (TP / Precision) - TP.
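
If it helps, here is a minimal Python sketch of the two rearrangements above; the function name and the rounding to whole detections are my own choices, not anything YOLOv5 itself provides:

def tp_fp_from_metrics(precision, recall, labels):
    """Estimate TP and FP counts from per-class precision, recall and label count."""
    tp = recall * labels        # recall = TP / (TP + FN) and labels = TP + FN
    fp = tp / precision - tp    # precision = TP / (TP + FP)
    return round(tp), round(fp)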

In your calculation you might have used Images instead of Labels:

TP = Recall * Labels = 0.689 * 29 = 19.98 ≈ 20
FP = (TP / Precision) - TP = (20 / 0.588) - 20 ≈ 14
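
Plugging your table's numbers into the sketch above reproduces the same counts; the Class 2 values are just the same arithmetic applied to its row:

# Class 1: precision 0.588, recall 0.689, 29 labels
print(tp_fp_from_metrics(0.588, 0.689, 29))   # (20, 14)

# Class 2: precision 0.301, recall 0.714, 7 labels
print(tp_fp_from_metrics(0.301, 0.714, 7))    # (5, 12)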

Reference: https://en.wikipedia.org/wiki/Precision_and_recall

    Thanks, this is what I was looking for! You are right, I used "images" instead of labels. – ihb Jan 13 '22 at 17:10