python FaceRecognizer questions

Philipp, I truly hope I haven't been overly annoying with all these questions, and sincerely apologize if I have. I'm working really hard to understand all this so I can implement it in my classroom next month. I have your facerec_demo.py working now. I have read this:

  • http://answers.opencv.org/question/936/python-face-recognition-with-opencv/#948

I still have a few questions if I may.

  • If I understand correctly, facerec_demo.py just trains the recognizer? When I run it, I always get the same output, but I'm at a loss to determine what input the code is using to recognize: I get Predicted label = 0 and confidence = 0.00, the eigenfaces are written to my folder just fine, and I get a test.png that matches s2/10.pgm from the AT&T database. I'm thinking the 0 label and confidence indicate I'm doing something wrong. I read in your comments in the code that "you should always use unseen images for testing your model, but ... I am just using an image we have trained with."

  • Is that the test.png image? If I were to build my own database, how would I pass the test image (what I want to recognize) into the now-trained recognizer?

  • Would a Python model.save(filename) work as described on your FaceRecognizer wiki pages?

  • Once I get these bits figured out, based on my reading of the other post listed above: if I build a database with, say, my pictures cropped and grayscaled, added in as a new folder to the AT&T database, then get a webcam snapshot, normalize it, crop it, and grayscale it, could I then use (for example) kNN to compare the new picture to the database and find the closest match as the predicted output?

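And for the kNN idea, this toy pure-Python sketch is what I mean by "compare the new pic to the database and find the closest match" — made-up 4-pixel "images" and names, just to check my understanding, not OpenCV code:

```python
import math

# Toy "database": each entry is a flattened grayscale image plus its label.
# Real entries would be full normalized face vectors, e.g. 92*112 values each.
database = [
    ([10, 10, 200, 200], "alice"),
    ([12, 11, 198, 205], "alice"),
    ([200, 200, 10, 10], "bob"),
]

def euclidean(a, b):
    """Straight-line distance between two flattened images."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_knn(query, database, k=1):
    """Return the majority label among the k database images closest to query."""
    neighbours = sorted(database, key=lambda entry: euclidean(query, entry[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

snapshot = [11, 9, 201, 199]            # a new normalized webcam crop
print(predict_knn(snapshot, database))  # label of the closest database image
```

If I understand the earlier post correctly, Eigenfaces is effectively doing this same nearest-neighbour step, only on the images projected into the PCA subspace rather than on raw pixels — is that right?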