# Python FaceRecognizer questions

I still have a few questions if I may.

• If I understand correctly, facerec_demo.py just trains the recognizer? When I run it, I always get the same output, but I'm at a loss to determine what input the code is using for recognition: I get Predicted label = 0 and confidence = 0.00, the eigenfaces are output to my folder just fine, and I get a test.png that matches s2/10.pgm from the AT&T database. I'm thinking the 0 label and confidence indicate I'm doing something wrong. I read in your comments in the code that "you should always use unseen images for testing your model, but ... I am just using an image we have trained with."

• Is that the test.png image? If I were to build my own database, how would I pass the test image (the one I want to recognize) into the now-trained recognizer?

• Would a Python cv2.model.save(filename) call work as described on your FaceRecognizer wiki pages?

• Once I get these bits figured out, based on my reading of the other post listed above: if I build a database with, say, my pictures cropped and grayscaled, added as a new folder to the AT&T database, then grab a webcam snapshot, normalize it, crop it, and grayscale it, is the above saying I could then use (for example) KNN to compare the new pic to the database and find the closest match as a predicted output?



Here are tutorials that should be really easy to adapt to Python:

Most of your questions are answered in these tutorials. First of all, thanks for letting me know about the test.png; it wasn't meant to be committed to the OpenCV repository (I've removed it from the sample now).

Yes, the facerec_demo.py trains an Eigenface Recognizer on a given database:

model.train(np.asarray(X), np.asarray(y))


Then, to show how prediction works, I simply predict on the first image of the given dataset:

[p_label, p_confidence] = model.predict(np.asarray(X[0]))


Since the image is already in your training dataset, the features extracted from the sample are exactly the same as the template stored in your learned model. That's why you predict 0 (correctly) and you get a distance of 0.00 between them. Why did I do it this way? In the C++ demo I trained the FaceRecognizer on all images but the last and predicted on that last image. A lot of people were confused by this, so for this demo I thought it would be simpler to just predict on a sample from the training dataset. Now it looks like people are even more confused by that.
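If it helps to see the mechanics, here is a rough numpy-only sketch of what Eigenfaces does under the hood (this is not the cv2 code; the function names and the toy data are made up). Projecting a training image yields exactly its own stored projection, so the nearest match is itself, at distance 0.00:

```python
import numpy as np

def train_eigenfaces(X, num_components):
    # PCA on flattened images: the mean face, the eigenfaces (columns
    # of W), and every training image projected into the eigenspace
    X = np.asarray(X, dtype=np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:num_components].T
    return mean, W, Xc @ W

def predict(mean, W, projections, y, img):
    # project the query and return label/distance of the nearest
    # training projection (roughly what model.predict does)
    q = (np.asarray(img, dtype=np.float64) - mean) @ W
    dists = np.linalg.norm(projections - q, axis=1)
    i = int(np.argmin(dists))
    return y[i], dists[i]

# toy data: 4 "flattened images" of 16 pixels, two subjects
rng = np.random.default_rng(0)
X = rng.random((4, 16))
y = [0, 0, 1, 1]
mean, W, proj = train_eigenfaces(X, num_components=3)

label, dist = predict(mean, W, proj, y, X[0])  # query = a training image
```

Predicting on X[0] gives label 0 at distance 0.0, which is the behaviour you saw; an unseen image of the same person would come back with a small but non-zero distance instead.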


Not doing too bad. I can't quite get the save function to work yet; I tried cv2.model.save("eigenModel.xml") but haven't really put any time into it. I seem to be coming along pretty well on the rest, though. Going to make a new database tomorrow and train on that; we'll see how far I get. I'm planning on getting about twenty cropped and grayscaled images of 4 family members, using their names as the directory labels (y), with just integers as the jpg filenames (X). From reading the comments, tutorials and code, I don't think I even need to go near KNN, do I? I can just run an image through the trained model and get a prediction, no? Thanks for the guidance, Philipp.
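For reference, this is the kind of helper I have in mind for walking that layout; the person and file names below are just made up to exercise it, and in the real thing each path would be loaded as grayscale before training:

```python
import os
import tempfile

def scan_database(root):
    # root/<person_name>/<n>.jpg -> parallel lists of paths and integer
    # labels, plus `names` so a predicted label maps back to a person
    paths, labels, names = [], [], []
    for name in sorted(os.listdir(root)):
        subdir = os.path.join(root, name)
        if not os.path.isdir(subdir):
            continue
        names.append(name)
        for fn in sorted(os.listdir(subdir)):
            paths.append(os.path.join(subdir, fn))
            labels.append(len(names) - 1)
    return paths, labels, names

# fake two-person database just to exercise the function
root = tempfile.mkdtemp()
for person in ("alice", "bob"):
    os.makedirs(os.path.join(root, person))
    for i in range(3):
        open(os.path.join(root, person, "%d.jpg" % i), "wb").close()

paths, labels, names = scan_database(root)  # labels = [0, 0, 0, 1, 1, 1]
```

The integer labels are what the recognizer trains on, and `names[p_label]` turns a prediction back into a person.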

(2012-08-09 20:39:09 -0500)

The save command works now; it's just model.save("eigenModel.xml").

• I can't seem to figure out what I'm doing wrong with the database, though. I took twenty pics each of 3 family members; the pics are face-detected, cropped at the face-detect box, grayscaled, and saved with a .pgm extension, but I keep getting an array error when I add them as directories s41, s42, and s43 to the AT&T orl_faces database. I'm thinking it's the pgm conversion; off to research that now.
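One likely cause of that array error is images of mixed sizes: the AT&T images are all 92x112 binary (P5) PGMs, so if the converter writes a different size the np.asarray step can choke. A quick stdlib check of the headers, no OpenCV needed (pgm_size is just my own sketch of a helper):

```python
import os
import tempfile

def pgm_size(path):
    # parse a PGM header (binary P5 or ASCII P2): magic, width, height
    with open(path, "rb") as f:
        data = f.read()
    tokens = []
    for line in data.split(b"\n"):
        tokens.extend(line.split(b"#")[0].split())  # drop '#' comments
        if len(tokens) >= 4:                        # magic, w, h, maxval
            break
    return (tokens[0].decode(),
            int(tokens[1].decode()),
            int(tokens[2].decode()))

# write one file shaped like the AT&T images (92x112, 8-bit, binary P5)
path = os.path.join(tempfile.mkdtemp(), "1.pgm")
with open(path, "wb") as f:
    f.write(b"P5\n92 112\n255\n" + bytes(92 * 112))

magic, w, h = pgm_size(path)  # ("P5", 92, 112)
```

Running this over every file in the s41–s43 folders and checking they all report the same (magic, w, h) as the original AT&T images should show whether the conversion is the culprit.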
(2012-08-10 11:40:24 -0500)

The link is no longer working...

(2016-09-13 19:46:57 -0500)
