Question about FaceRecognizer training

I have a question about how a face recognizer works, particularly the training.

I want to recognize a particular person with the predict() method, and I can think of two alternatives to train the face recognizer:

  1. Train the face recognizer with the AT&T faces database (or another anonymous face database) plus some selfies of the person I want to recognize.
  2. Train the face recognizer only with the selfies of the person I want to recognize and, if the distance returned by predict() is low (how low is "low"?), treat it as a match.

The "problem" is that our face recognition system cannot be trained in advance on the full set of potential users, so it's impossible to train the face recognizer with a predefined database of our own.


More background:

I'm working on an Android app that takes 10 selfies and compares them with the user's photo from his ID document: https://www.dnielectronico.es/img/anv... http://clipset.20minutos.es/nuevo-dni-electronico-3-0-con-nfc-ventajas-y-peligros/ That photo is a JPEG file read from the NFC interface of the DNIe 3.0.

We call it identity verification: the app guarantees that the person in the selfies is who he claims to be. As you can see, there is no enrollment process, so we don't call it authentication.

@berak I checked your proposal, and at first sight it seems that the MACE algorithm could be a better choice than the OpenCV FaceRecognizer. What do you think: would the selfies act as the enrollment images and the photo from the user's document as the query image? By the way, do you know of a Java binding for MACE?
