facerec_demo.py confidence issue [closed]
Hi, I'm trying to run the facerec_demo.py example (from here: https://github.com/Itseez/opencv/blob/2.4/samples/python2/facerec_demo.py), but I always get
Predicted label = 0 (confidence=0.00)
Can anyone help me understand why I'm getting this? I expected a real predicted label and confidence. Thank you very much!
Did you read the comment in line 126?
The demo is testing against the 1st file from the trainset (a known one), so that's the expected outcome.
In real life, you would train with known faces and test with unknown ones.
Thanks! How do I interpret the confidence? If the person is in the trained model, then the confidence is near 0; if not, it is higher (for example 2567.02). What I mean is: how can I translate this into an X% confidence? Do I explain myself? Thank you again!
'Confidence' might be a misnomer here. It's actually the Euclidean distance to the nearest face found in the db (which again explains the 0.0 value in the demo).
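To make that concrete, here is a minimal nearest-neighbour sketch in numpy (the feature vectors are made up for illustration): the "confidence" is just the distance to the closest training sample, so querying with an image that is already in the trainset gives exactly 0.0, like in the demo.

```python
import numpy as np

# toy "feature vectors" for three training faces (made-up data)
train = np.array([[10.0, 20.0],
                  [30.0, 40.0],
                  [50.0, 60.0]])
labels = [0, 1, 2]

def predict(query):
    # Euclidean distance from the query to every training sample
    dists = np.linalg.norm(train - query, axis=1)
    best = int(np.argmin(dists))
    # return (label of nearest sample, distance to it)
    return labels[best], float(dists[best])

# querying with a sample that IS in the trainset -> distance 0.0
print(predict(train[0]))                  # (0, 0.0)
# an unseen sample gives a non-zero distance
print(predict(np.array([31.0, 41.0])))
```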
One use of it is to determine a heuristic threshold value: if, e.g., most of your false predictions lie above a certain value here, you'd feed that into the threshold value in the constructor, and they will get culled (label = -1) in the next runs.
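That culling logic is simple enough to sketch in plain Python (the threshold value 100.0 below is made up; you'd pick yours by inspecting your own false predictions):

```python
THRESHOLD = 100.0   # heuristic value, found by looking at false predictions

def cull(label, distance, threshold=THRESHOLD):
    # mimic what the recognizer's threshold does:
    # anything farther away than the threshold becomes "unknown" (-1)
    if distance > threshold:
        return -1
    return label

print(cull(3, 42.0))     # 3  -> accepted, close enough
print(cull(3, 2567.02))  # -1 -> culled as unknown
```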
Unfortunately, getting it into a [0..1] range (for a more real 'confidence') is a bit difficult. It's probably something like 1 - distance/(num_eigenvecs * 255) in the eigenfaces case, or 1 - distance/(num_patches * 2 * 255) in the LBPH case; I'm just making that up here, to show you that all FaceRecognizer classes have different feature spaces (which makes this task complicated).
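If you want to try it anyway, a rough illustration only (the scale factor is a guess per recognizer, as said above, e.g. num_eigenvecs * 255 for eigenfaces; none of this is an official formula):

```python
def pseudo_confidence(distance, max_distance):
    # squash the distance into [0..1]; max_distance is a per-recognizer
    # guess for the largest plausible distance (clamped at 0 below)
    return max(0.0, 1.0 - distance / max_distance)

# with a guessed max_distance of 10 eigenvectors * 255:
print(pseudo_confidence(0.0, 10 * 255))      # 1.0 -> "perfect match"
print(pseudo_confidence(2567.02, 10 * 255))  # clamps to 0.0 -> "no match"
```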
If you're interested, bytefish reworked most of it in Python here.
Might be easier to understand for a Python guy ;)
(1/3)
After reading both the documentation of the facerec Python framework and the article at http://www.bytefish.de/blog/fisherfaces/, I have some questions about the accuracy. I understand that the "confidence" before performing a k-fold cross validation represents the Euclidean distance to the nearest face found in the db. I also understand that turning the Euclidean distance into an accuracy of the form X% is hard. What I don't completely understand is: what does the accuracy after performing the k-fold cross validation mean? Is it the accuracy of identifying a person among the faces trained in the set?
(2/3)
Suppose I have a set of photos of one person (X) and a sample image (Y). What I want to do is check whether the person in the sample image (Y) is the same person of whom I have the set of photos (X). It's like a face verification use case. In order to use the facerec library for face verification, what I'm doing is to treat the single image as another subject, so I have:
|-- s01
|   |-- 01.jpg
|   |-- 02.jpg
|   |-- 03.jpg
|   [...]
|-- s02
|   |-- 01.jpg
Here s01 is the person of whom I have the set of photos (X), and s02 represents the sample image (Y).
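For reference, reading that folder layout into paths/labels lists can be sketched like this (along the lines of read_images() in the demo, but using empty placeholder files here instead of real jpegs, so it runs without image data):

```python
import os
import tempfile

# build the layout above with empty placeholder files
root = tempfile.mkdtemp()
for person, files in [("s01", ["01.jpg", "02.jpg", "03.jpg"]),
                      ("s02", ["01.jpg"])]:
    d = os.path.join(root, person)
    os.makedirs(d)
    for f in files:
        open(os.path.join(d, f), "w").close()

# one integer label per subfolder, like facerec_demo does
paths, labels = [], []
for label, person in enumerate(sorted(os.listdir(root))):
    for f in sorted(os.listdir(os.path.join(root, person))):
        paths.append(os.path.join(root, person, f))
        labels.append(label)

print(labels)   # [0, 0, 0, 1] -> three samples for s01, one for s02
```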
(3/3)
This way I can then perform a k-fold cross validation or another validation scheme. My second question here is: which do you think is the best classifier to use for my "face verification" use case? Also, do you think the way I'm doing it, using two subjects, makes sense? If you think there's a better way of doing what I need, please let me know; I would really appreciate it. Thank you very much!
Sorry, I did not understand your X/Y setup above.
The 'accuracy' is probably determined by something like correct_predictions / num_predictions, so it's not related to the distance.
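I.e. something like this (a sketch; the predicted/expected labels below are made up):

```python
def accuracy(predicted, expected):
    # fraction of predictions that matched the ground-truth label
    correct = sum(p == e for p, e in zip(predicted, expected))
    return correct / float(len(expected))

# 3 of the 4 predictions match the expected labels
print(accuracy([0, 1, 1, 2], [0, 1, 2, 2]))   # 0.75
```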