Improving face recognition accuracy
I'm working on a face recognition program using OpenCV 3.1 with Python 3 on Linux, and I'm trying to increase recognition accuracy as much as I can.
My issue: The confidence values between Person A (myself) and Person B (a friend) are a bit too close. There is a "fair" amount of difference, but not enough to set a threshold without getting false positives/negatives. I wrote a script to recognize Person A over a set of images for Person B and calculate the average confidence so I could see how much they differ by, and I noticed that as the face size in Step 3 of Preprocessing (see below) increased, the difference decreased.
My expectation was that increasing the face size would preserve more detail and thus increase the difference. Detected face sizes in this case were roughly 1500x1500.
My question: How can I improve face recognition accuracy?
Below is some information about my project. Thanks.
Files used:
- OpenCV's Haar cascade (`haarcascade_frontalface_default.xml`) with a `scaleFactor` of `1.1` and `minNeighbors` of `10` for detecting faces.
- The Local Binary Patterns Histograms algorithm (`createLBPHFaceRecognizer`) for recognizing faces.
Image information:
- Each raw image is 4928x3264
- Same lighting conditions
- Different facial expressions
- Different angles (heads tilting / facing different directions)
Preprocessing Steps:
- Cropping the face out of the whole image
- Converting it to grayscale
- Resizing it to a "standard" size
- Histogram Equalization to smooth out lighting differences
- Applying a Bilateral Filter to smooth out small details
Training Steps:
- Preprocess raw images for a given person
- Train recognizer using preprocessed faces
- Save trained recognizer model to a file
Recognition Steps:
- Load recognizer model from file
- Take in image from either file or webcam
- Detect face
- Preprocess face (see above)
- Attempt recognition
"Different angles (heads tilting / facing different directions)" -- try to avoid that. in fact, most face reco systems try to normalize the face to straightforward pose, and align them, so e.g. eyes are at a fixed position.
How many images per person do you use? Try 20+ each.
Like @berak already said, adding these large images is overkill. You do not want to model local pixel deformations, which can be caused by lighting differences, for example. You want to model the general features of that person.
@berak I've been using a minimum of 20 images per person, and I chose such a huge resolution for the raw images so that I had a lot of detail to start with. I forgot to mention that I was trying different sizes during preprocessing. I tried 500x500, 1000x1000, then of course 1500x1500 (basically no resizing), and that's when I noticed the trend in confidence values.
I can only guess, but sparser histograms (from smaller images) might work better with LBPH, and denser histograms might just "saturate to uniformity".