facerec_demo.py confidence issue [closed]

asked 2014-01-07 17:18:45 -0600

Agustin Haller

updated 2014-01-08 03:44:59 -0600

berak

Hi, I'm trying to run the facerec_demo.py example (from here: https://github.com/Itseez/opencv/blob/2.4/samples/python2/facerec_demo.py), but I always get

Predicted label = 0 (confidence=0.00)

Can anyone help me understand why I'm getting this? I expected a real predicted label and confidence. Thank you very much!


Closed for the following reason: question is not relevant or outdated (by sturkmen)
close date 2020-10-27 05:39:45.516523

Comments


Did you read the comment in line 126?

The demo is testing against the 1st file from the training set (a known one), so that's the expected outcome.

In real life, you would train with known faces and test with unknown ones.
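A minimal sketch of such a train/test split in plain Python (the subject labels, file paths, and counts below are made up for illustration; the demo itself reads real images from an AT&T-style folder layout):

```python
# Hypothetical sketch: hold out one image per subject for testing,
# instead of predicting on an image that was part of the training set.
# Paths and labels here are invented, not the demo's actual data.

def split_train_test(images_per_subject):
    """images_per_subject: dict label -> list of image paths.
    Returns (train, test) lists of (path, label) pairs, holding out
    the last image of each subject for testing."""
    train, test = [], []
    for label, paths in images_per_subject.items():
        train += [(p, label) for p in paths[:-1]]
        test.append((paths[-1], label))
    return train, test

dataset = {0: ["s01/01.jpg", "s01/02.jpg", "s01/03.jpg"],
           1: ["s02/01.jpg", "s02/02.jpg"]}
train, test = split_train_test(dataset)
print(len(train), len(test))  # 3 2
```

Training on `train` and predicting only on `test` would then give a non-trivial label and distance, unlike testing on the first training image.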

berak ( 2014-01-08 03:46:59 -0600 )

Thanks! How do I interpret the confidence? If the person is in the trained model, the confidence is near 0, and if not, it is higher (for example 2567.02). What I mean is: how can I translate this into an X% confidence? Do I explain myself? Thank you again!

Agustin Haller ( 2014-01-09 19:38:25 -0600 )

'confidence' might be a misnomer here. It's actually the Euclidean distance to the nearest face found in the db. (That, again, might explain the 0.0 value in the demo.)

One use of it is to determine a heuristic threshold value: if, e.g., most of your false predictions lie over a certain distance, you'd feed that into the threshold value in the recognizer's constructor, and they will get culled off (label = -1) in the next runs.
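A sketch of that culling heuristic in plain Python (the distances and the 2000.0 threshold are invented; in OpenCV itself the threshold is passed to the recognizer's constructor and the culling happens inside predict):

```python
# Hedged sketch: cull predictions whose distance exceeds a threshold
# chosen from previously observed false predictions, mimicking the
# label = -1 behaviour described above. All numbers are made up.

def cull(predictions, threshold):
    """predictions: list of (label, distance) pairs. Replace the label
    with -1 when the distance exceeds the threshold."""
    return [(label if dist <= threshold else -1, dist)
            for label, dist in predictions]

# suppose most false predictions were observed above ~2000
threshold = 2000.0
preds = [(0, 150.3), (2, 2567.02), (1, 980.0)]
print(cull(preds, threshold))  # the 2567.02 prediction becomes label -1
```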

Unfortunately, getting it into a [0..1] range (for a more real 'confidence') is a bit difficult. It's probably something like 1 - distance/(num_eigenvecs * 255) in the Eigenfaces case, or 1 - distance/(num_patches * 2 * 255) in the LBPH case. I'm just making that up here, to show you that all FaceRecognizer classes have different feature spaces (which makes this task complicated).
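Those made-up formulas, written out as code (explicitly a guess, not an OpenCV API; `num_eigenvecs` and `num_patches` are hypothetical parameters you'd read off your own model):

```python
# Sketch of the invented normalizations above: map a raw distance into
# [0..1] so it reads more like a confidence. Not an OpenCV function.

def eigen_confidence(distance, num_eigenvecs):
    # guessed scaling for the Eigenfaces feature space
    return max(0.0, 1.0 - distance / (num_eigenvecs * 255.0))

def lbph_confidence(distance, num_patches):
    # guessed scaling for the LBPH feature space
    return max(0.0, 1.0 - distance / (num_patches * 2 * 255.0))

print(round(eigen_confidence(2567.02, 80), 3))  # 0.874
```

The point of the two different denominators is exactly berak's caveat: each recognizer lives in its own feature space, so a single universal distance-to-percentage mapping doesn't exist.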

berak ( 2014-01-10 04:07:27 -0600 )

If you're interested, bytefish reworked most of it in Python here.

Might be easier to understand for a Python guy ;)

berak ( 2014-01-10 04:09:46 -0600 )

(1/3)

After reading both the documentation of the facerec Python framework and the article at http://www.bytefish.de/blog/fisherfaces/ I have some questions about the accuracy. I understand that the "confidence" before performing a k-fold cross-validation represents the Euclidean distance to the nearest face found in the db, and I also understand that getting an accuracy in the form of X% is hard given the Euclidean distance. What I don't completely understand is: what does an accuracy obtained after performing the k-fold cross-validation mean? Is it the accuracy of identifying the person among the faces trained in the set?
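An illustrative sketch (not the facerec framework's actual code) of what a k-fold cross-validation accuracy measures: in each fold, train on k-1 parts of the labelled set and count how often the held-out part is predicted correctly. The "model" below is a toy stand-in that just remembers the majority label; a real run would train and predict with a face recognizer instead:

```python
# Generic k-fold accuracy: fraction of held-out samples predicted
# correctly across all folds. train_fn/predict_fn are placeholders.

def kfold_accuracy(samples, labels, k, train_fn, predict_fn):
    n = len(samples)
    correct = 0
    for fold in range(k):
        test_idx = [i for i in range(n) if i % k == fold]
        train_idx = [i for i in range(n) if i % k != fold]
        model = train_fn([samples[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        for i in test_idx:
            if predict_fn(model, samples[i]) == labels[i]:
                correct += 1
    return correct / float(n)

# toy stand-in "recognizer": the model is just the majority label
def train_fn(xs, ys):
    return max(set(ys), key=ys.count)

def predict_fn(model, x):
    return model

acc = kfold_accuracy([0] * 6, [1, 1, 1, 1, 1, 0], 3, train_fn, predict_fn)
print(round(acc, 2))  # 0.83
```

So yes: the reported number is the fraction of held-out faces whose identity was predicted correctly, and it is unrelated to the raw distance value.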

Agustin Haller ( 2014-01-13 00:15:12 -0600 )

(2/3)

Suppose that I have a set of photos of one person (X) and a sample image (Y). What I want to do is check whether the person in the sample image (Y) is the same person of whom I have the set of photos (X). It's like a face verification use case. In order to use the facerec library for face verification, what I'm doing is to consider the single image as another sample, so I have:

|-- s01
|   |-- 01.jpg
|   |-- 02.jpg
|   |-- 03.jpg
|   [...]
|-- s02
|   |-- 01.jpg

Where s01 is the person of whom I have the set of photos (X), and s02 represents the sample image (Y).

Agustin Haller ( 2014-01-13 00:15:59 -0600 )

(3/3)

This way I can then perform a k-fold cross-validation or another validation. My second question here is: which classifier do you think is best for my "face verification" use case? Also, do you think the way I'm doing it, using the two subjects above, makes sense? If you think there's a better way of doing what I need, please let me know; I would really appreciate it. Thank you very much!

Agustin Haller ( 2014-01-13 00:17:25 -0600 )

Sorry, I did not understand your X/Y setup above.

The 'accuracy' is probably determined by something like correct_predictions / num_predictions (i.e., 1 - false_predictions / num_predictions). So it's not related to the distance.
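That formula as a minimal sketch (the prediction lists are invented for illustration):

```python
# Accuracy as the fraction of correct predictions; the raw distance
# values play no role here.

def accuracy(predicted, actual):
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / float(len(actual))

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```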

berak ( 2014-01-13 05:38:57 -0600 )