
I'm using three pre-trained deep learning models to detect and count vehicles in an image dataset. Each vehicle belongs to one of the classes ['car', 'truck', 'motorcycle', 'bus']. For a sample of images, I manually counted the number of vehicles in each image, and I also ran the three deep learning models and obtained their vehicle counts. For example:

    Actual        | model 1 count | model 2 count   | model 3 count
    --------------+---------------+-----------------+---------------
    4 cars, 1 bus | 2 cars        | 2 cars, 1 truck | 4 cars
    2 cars        | 0             | 1 truck         | 1 car, 1 bus

In this case, how can I measure accuracy scores such as precision and recall?

1 Answer


Precision is the number of true positives (TP) over the number of predicted positives (PP), and recall is the number of true positives over the number of actual positives (AP). I use these initials just to make things easier to follow below.

A true positive is when you predict a car in a place and there is a car in that place.

A predicted positive is every car you predict, being right or wrong does not matter.

An actual positive is every car that actually is in the picture.
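
In symbols, per category:

$$\text{precision} = \frac{TP}{PP}, \qquad \text{recall} = \frac{TP}{AP}$$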

You should calculate these values separately for each category, summing the counts over all the images you sampled, if I am not mistaken.

So, for the car category, you have (assuming the predictions actually match the targets, i.e., you are not counting, for example, a truck that was predicted as a car):

    model 1
    line 1 -> 2 TP, 2 PP, 4 AP
    line 2 -> 0 TP, 0 PP, 2 AP

So, in total, precision is 2/2 = 1 and recall is 2/6 ≈ 0.33.
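
As a minimal sketch, here is how that per-category bookkeeping could look in Python (the image dictionaries and the `per_class_precision_recall` helper are made up for illustration; obtaining the TP counts in the first place requires matching predictions to ground truth, as discussed below):

    from collections import Counter

    CLASSES = ["car", "truck", "motorcycle", "bus"]

    # Made-up per-image counts for model 1, taken from the table above:
    # how many objects of each class were correctly matched (tp),
    # predicted (pp), and actually present (ap).
    images_model_1 = [
        {"tp": Counter({"car": 2}), "pp": Counter({"car": 2}), "ap": Counter({"car": 4, "bus": 1})},
        {"tp": Counter(),           "pp": Counter(),           "ap": Counter({"car": 2})},
    ]

    def per_class_precision_recall(images, cls):
        """Sum TP, PP and AP for one class over all images, then take the ratios."""
        tp = sum(img["tp"][cls] for img in images)
        pp = sum(img["pp"][cls] for img in images)
        ap = sum(img["ap"][cls] for img in images)
        precision = tp / pp if pp else 0.0
        recall = tp / ap if ap else 0.0
        return precision, recall

    for cls in CLASSES:
        p, r = per_class_precision_recall(images_model_1, cls)
        print(f"model 1, {cls}: precision={p:.2f}, recall={r:.2f}")

For this tiny sample it reproduces the car numbers above (precision 1.0, recall ≈ 0.33). Note that precision is reported as 0 when a class has no predicted positives, although strictly it is undefined in that case.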

You can then do the same for the other categories and for the other models. This way, you can check whether a model predicts one category better than another. For example, model 1 might be better at finding cars in a picture, whilst model 3 might be better at finding buses.

The important part is that you know whether the objects the model predicted actually correspond to what is in the picture. A very unlikely example would be a picture with 1 car and 1 truck where the algorithm recognizes the car as a truck and the truck as a car. From the information in the table, I cannot be sure whether the 2 cars predicted in the first row are the actual cars in the picture, in other words, whether they are actually true positives or actually false positives.
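
If you also have the bounding boxes for the predictions and the ground truth, a common way to decide this is to match each predicted box to an unmatched ground-truth box of the same class, counting it as a true positive when their IoU (intersection over union) exceeds some threshold. A minimal sketch, assuming axis-aligned boxes given as (x1, y1, x2, y2) tuples and an arbitrary 0.5 threshold (in practice you would also process predictions in order of confidence):

    def iou(a, b):
        """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    def count_true_positives(pred_boxes, gt_boxes, threshold=0.5):
        """Greedily match each (class, box) prediction to an unused ground-truth box."""
        used, tp = set(), 0
        for p_cls, p_box in pred_boxes:
            best, best_iou = None, threshold
            for i, (g_cls, g_box) in enumerate(gt_boxes):
                if i in used or g_cls != p_cls:
                    continue
                score = iou(p_box, g_box)
                if score >= best_iou:
                    best, best_iou = i, score
            if best is not None:
                used.add(best)
                tp += 1
        return tp  # predicted positives = len(pred_boxes), actual positives = len(gt_boxes)

A duplicate detection of the same car, for example, would then count as a false positive, since its ground-truth box is already matched.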

Miguel Saraiva
  • Thanks a lot for your answer! Yes, it is more complex than I thought. For example, there is one model which draws two bounding boxes over one car. However, your answer is so helpful! – Nilani Algiriyage Dec 28 '19 at 09:37
  • @NilaniAlgiriyage that would make the box with the higher IoU with the ground truth the true positive and the other box a false positive. You'd want your model to give higher confidence to this TP box. – deadcode Dec 28 '19 at 10:33