
I am working on a computer vision project, based on face detection to record the time spent by a person in an office.

It consists of detecting a face with camera number 1 (at the entrance), temporarily storing the detected face, and then calculating the time spent until the same person leaves and their face is detected by camera number 2. (We don't have a customer database.)

Is there a better approach to follow? I would also appreciate articles to read on the topic.

  • Could you please put your main **specific** question in the title? "Face recognition..." does not end with `?`, so it's not a question. Moreover, the title doesn't seem to describe your actual problem. You actually have 2 cameras, so not just "from a single image". – nbro Mar 04 '21 at 11:37

1 Answer

Matching two images of the same person can be done with a Siamese neural network. It compares the feature vectors (embeddings) of the two images; if the distance between the two embeddings is small enough, it's a match. The advantage of this approach is that you don't need the person's face to already exist in a database. You can use a pre-trained network such as DeepFace to compute and compare the embeddings. However, I expect you will have more trouble wiring up the real-time camera input, since you have to store the embeddings from camera 1 so they can be compared later against the faces detected by camera 2.
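A minimal sketch of that matching-and-timing logic, assuming the embeddings come from some pre-trained face model (e.g. DeepFace); the `VisitTracker` class, the 0.4 distance threshold, and the list-based store are all placeholders of mine, not a real library API:

```python
import time
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity; small values mean the faces likely match."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class VisitTracker:
    """In-memory store of entry-camera embeddings and entry timestamps."""

    def __init__(self, threshold=0.4):
        # Threshold is model-dependent; 0.4 is just a placeholder here.
        self.threshold = threshold
        self.entries = []  # list of (embedding, entry_time) pairs

    def record_entry(self, embedding, entry_time=None):
        """Called when camera 1 detects a face."""
        self.entries.append((embedding, entry_time if entry_time is not None else time.time()))

    def record_exit(self, embedding, exit_time=None):
        """Called when camera 2 detects a face.

        Matches it to the closest stored entry; returns elapsed seconds,
        or None if no stored face is close enough.
        """
        exit_time = exit_time if exit_time is not None else time.time()
        if not self.entries:
            return None
        # Find the stored entry whose embedding is nearest to this face.
        i, (emb, t0) = min(
            enumerate(self.entries),
            key=lambda p: cosine_distance(p[1][0], embedding),
        )
        if cosine_distance(emb, embedding) >= self.threshold:
            return None  # no sufficiently close match
        del self.entries[i]  # the person has left; forget their entry
        return exit_time - t0
```

In a real system the embeddings would be produced by the pre-trained network for each detected face crop, and you would likely also expire stale entries (e.g. after closing time) rather than keep them forever.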

Rambo_john