
How large should the dataset be when using cv2 recognizers (FisherFace, LBPH)? 10 images? 20? 100?

Q1 - Should the people be facing the camera, or can I use images of them looking slightly to the sides or slightly up and down?

Q2 - What is the minimum recommended dataset size?

Q3 - What is the maximum recommended dataset size?

Q4 - When should I worry about overfitting?

Q5 - All faces found during detection are resized to 350x350, whether it's during dataset preparation or during actual recognition. Is this right?

I'm using around 7 - 15 images for each person; most are 640x420 frames I captured with my webcam, and some are taken from social media. First an LBP or Haar detector is used to find and crop the faces and store them as 350x350 greyscale images. Those images are then used as the dataset when training the recognizer (the dataset labels are the names of the folders holding each person's images).
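
Roughly, my preparation/training step looks like the sketch below (the folder layout, cascade file, and model filename are just placeholders for my setup, not exact code):

```python
import os
import cv2
import numpy as np

# Placeholder layout: dataset/<person_name>/*.jpg, one folder per person
DATASET_DIR = "dataset"
FACE_SIZE = (350, 350)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

images, labels, names = [], [], []
for label, name in enumerate(sorted(os.listdir(DATASET_DIR))):
    names.append(name)
    person_dir = os.path.join(DATASET_DIR, name)
    for filename in os.listdir(person_dir):
        img = cv2.imread(os.path.join(person_dir, filename))
        if img is None:
            continue
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # crop the detected face and normalise it to 350x350 greyscale
            face = cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE)
            images.append(face)
            labels.append(label)

# LBPH recognizer from opencv-contrib-python (cv2.face module)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(images, np.array(labels))
recognizer.write("lbph_model.yml")
```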

The detector is then fed a live stream of frames; the faces it finds are turned into 350x350 greyscale images and fed into the recognizer. Unfortunately, the recognizers are usually pretty confused and mostly return person 1 (who has 30 images) for most faces.
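
The recognition loop is roughly this (again, the model filename, camera index, and the distance threshold are placeholders, not my exact code):

```python
import cv2

FACE_SIZE = (350, 350)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("lbph_model.yml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE)
        label, distance = recognizer.predict(face)
        # For LBPH, lower distance means a closer match; I assume some
        # threshold is needed to reject weak matches instead of always
        # labelling the face as the nearest person.
        print(label, distance)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```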

Any advice / answers are welcome.