Improving face recognition accuracy

asked 2016-08-29 15:05:00 -0600 by tyler-cromwell

I'm working on a face recognition program using OpenCV 3.1 with Python 3 on Linux, and I'm trying to increase recognition accuracy as much as I can.

My issue: The confidence values between Person A (myself) and Person B (a friend) are a bit too close. There is a "fair" amount of difference, but not enough to set a threshold without getting false positives/negatives. I wrote a script to recognize Person A over a set of images for Person B and calculate the average confidence so I could see how much they differ by, and I noticed that as the face size in Step 3 of Preprocessing (see below) increased, the difference decreased. My expectation was that by increasing the face size, there would be more detail and thus the difference would increase. Detected face sizes in this case were roughly 1500x1500.

My question: How can I improve face recognition accuracy?

Below is some information about my project. Thanks.


Files used:

  • OpenCV's Haar Cascade (haarcascade_frontalface_default.xml) with a scaleFactor of 1.1 and minNeighbors of 10 for detecting faces.
  • Local Binary Patterns Histograms algorithm (createLBPHFaceRecognizer) for recognizing faces.

Image information:

  • Each 4928x3264
  • Same lighting conditions
  • Different facial expressions
  • Different angles (heads tilting / facing different directions)

Preprocessing Steps:

  1. Cropping the face out of the whole image
  2. Converting it to grayscale
  3. Resizing it to a "standard" size
  4. Histogram Equalization to smooth out lighting differences
  5. Applying a Bilateral Filter to smooth out small details

Training Steps:

  1. Preprocess raw images for a given person
  2. Train recognizer using preprocessed faces
  3. Save trained recognizer model to a file

Recognition Steps:

  1. Load recognizer model from file
  2. Take in image from either file or webcam
  3. Detect face
  4. Preprocess face (see above)
  5. Attempt recognition

Comments

  • 1500x1500 -- that's huge ! usually, 100x100 should do.
  • "Different angles (heads tilting / facing different directions)" -- try to avoid that. in fact, most face reco systems try to normalize the face to a frontal pose and align it, so that e.g. the eyes are at fixed positions.

  • how many images per person do you use ? try like 20+ each.

berak ( 2016-08-30 01:35:58 -0600 )

Like @berak already said, adding such large images is overkill. You do not want to model local pixel deformations, which can be caused by lighting differences, for example. You want to model the general features of that person.

StevenPuttemans ( 2016-08-30 04:05:42 -0600 )

@berak I've been using a minimum of 20 images per person, and I chose such a huge resolution for the raw images so that I had a lot of detail to start with. I forgot to mention that I was trying different sizes for preprocessing. I tried 500x500, 1000x1000, then of course 1500x1500 (basically not resizing), and that's when I noticed the trend in confidence values.

tyler-cromwell ( 2016-08-30 09:58:20 -0600 )

i can only guess, but sparser histograms (from smaller images) might work better with lbph, while denser histograms might just "saturate to uniformity".

berak ( 2016-08-31 04:36:16 -0600 )
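One way to picture berak's guess above is to compute a basic 3x3 LBP histogram in plain NumPy. This is a simplified stand-in for what LBPH does per grid cell (OpenCV's radius/neighbour/grid parameters are omitted): with many more pixels per cell, the 256 bins fill up and different faces' normalized histograms can drift toward similar, near-uniform shapes, compressing the distance between them.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 3x3 LBP: compare each pixel to its 8 neighbours and
    histogram the resulting 8-bit codes (256 bins, normalized)."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # center pixels (borders skipped)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        # neighbour plane shifted by (dy, dx), same shape as `c`
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```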