

Here are tutorials that should be really easy to adapt to Python:

  • http://docs.opencv.org/trunk/modules/contrib/doc/facerec/index.html

Most of your questions are answered in these tutorials. First of all, thanks for letting me know about the test.png; it wasn't meant to be committed to the OpenCV repository (I've removed it from the sample now).

Regarding your questions:

Yes, the facerec_demo.py trains an Eigenface Recognizer on a given database:

model.train(np.asarray(X), np.asarray(y))
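Under the hood, Eigenface training is essentially PCA on the flattened images plus storing each sample's projection. Here is a minimal NumPy-only sketch of that idea, using random arrays as a hypothetical stand-in for a face database (the real OpenCV recognizer does this internally in C++, so treat this as an illustration, not its actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for a face database: 6 "images" of 8x8 pixels,
# each flattened to one row of the data matrix, with one label per image.
X = rng.random((6, 64))
y = np.array([0, 0, 1, 1, 2, 2])

# PCA: subtract the mean face, then take the top principal directions.
mean = X.mean(axis=0)
centered = X - mean
# SVD of the centered data gives the eigenfaces without explicitly
# forming the covariance matrix.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:4]                      # keep 4 eigenfaces

# "Training" stores the projection of every sample into eigenface space.
projections = centered @ components.T    # shape (6, 4)
```

With real data, X would be built by loading each grayscale image (all the same size) and flattening it to a row vector, exactly what `np.asarray(X)` receives in the demo.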

Then, to show how the prediction is done, I simply predict on the first image of the given dataset:

[p_label, p_confidence] = model.predict(np.asarray(X[0]))

Since the image is already in your training dataset, the features extracted from the sample are exactly the same as the template in your learned model. That's why you predict 0 (correctly) and get a distance of 0.00 between them. Why did I do it this way? In the C++ demo I trained the FaceRecognizer with all images but the last and predicted on that last image. A lot of people were confused by this, so for this demo I thought it would be simpler to just predict on a sample from the training dataset. Now it looks like people are even more confused by that. The conclusion is that I shouldn't show how to do a prediction at all.