Need for eye detection in face recognition?

asked 2013-02-27 20:12:12 -0600


Hello all,

I am trying to get face recognition working on my Debian Linux machine. I have made some modifications on top of the code developed by Shervin Emami.
(Shervin's code : https://github.com/MasteringOpenCV/code/tree/master/Chapter8_FaceRecognition)
I am using LBPH for face recognition, as recommended by Berak.
Now, on to my problems.
(1) It seems that Shervin's approach relies quite heavily on eye detection so that the captured face can be aligned properly. Is this the only way to align faces? I have observed that eye detection takes noticeably more time, especially when someone is wearing glasses.
(2) Do haarcascade_lefteye_2splits.xml / haarcascade_righteye_2splits.xml detect eyes behind glasses? (A trimmed sketch of how I call them is included below.)
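
Not my exact code, just roughly the call pattern I mean (OpenCV 2.4-era C++ API; the cascade files are the stock ones from OpenCV's data folder, the test image name is a placeholder):

    // Sketch: run the left/right eye cascades only inside the upper part of a
    // detected face rectangle. All file names are placeholders / stock data files.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/objdetect/objdetect.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        cv::CascadeClassifier faceCascade("haarcascade_frontalface_alt.xml");
        cv::CascadeClassifier leftEyeCascade("haarcascade_lefteye_2splits.xml");
        cv::CascadeClassifier rightEyeCascade("haarcascade_righteye_2splits.xml");

        cv::Mat gray = cv::imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
        if (gray.empty()) { std::cerr << "could not read test.jpg" << std::endl; return 1; }
        cv::equalizeHist(gray, gray);

        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

        for (size_t i = 0; i < faces.size(); ++i)
        {
            cv::Mat face = gray(faces[i]);
            int w = face.cols, h = face.rows;
            // Only search the upper part of the face, split into a left and a right half.
            cv::Rect leftArea (0,     h / 5, w / 2, h / 3);
            cv::Rect rightArea(w / 2, h / 5, w / 2, h / 3);

            std::vector<cv::Rect> leftEyes, rightEyes;
            leftEyeCascade.detectMultiScale(face(leftArea), leftEyes, 1.1, 3,
                                            cv::CASCADE_FIND_BIGGEST_OBJECT, cv::Size(20, 20));
            rightEyeCascade.detectMultiScale(face(rightArea), rightEyes, 1.1, 3,
                                             cv::CASCADE_FIND_BIGGEST_OBJECT, cv::Size(20, 20));

            std::cout << "face " << i << ": left eye "
                      << (leftEyes.empty() ? "missed" : "found") << ", right eye "
                      << (rightEyes.empty() ? "missed" : "found") << std::endl;
        }
        return 0;
    }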


The OpenCV data folder contains several classifiers. Are there any other classifiers available that give better face/eye detection?


One more thing: Android Face Unlock and KeyLemon's face recognition are very fast at recognizing faces and don't seem to need much training data.
Any idea which algorithm is used there?

Thanks,
Soaptechie


2 answers


answered 2013-02-28 02:36:22 -0600 by StevenPuttemans


There is a difference between detection and actual recognition, and it is one you should clearly make. Detection is done with the boosted cascade of weak classifiers approach by Viola & Jones (using Haar or LBP features), which is in fact the most commonly used face detector; LBPH, on the other hand, is a recognition technique.

Once you have detected the face, recognition can be done in many ways using the detected region of interest. Some good candidate techniques are: example-based matching, Eigenfaces, Fisherfaces, LBPH, ...
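
To make the distinction concrete, here is a minimal sketch of the usual two-stage pipeline (OpenCV 2.4-era C++ API, LBPH from the contrib module). The file names, label ids and crop size are placeholders, not something from your setup:

    // Detection (boosted cascade) followed by recognition (LBPH) in one pipeline.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/objdetect/objdetect.hpp>
    #include <opencv2/contrib/contrib.hpp>   // cv::createLBPHFaceRecognizer (OpenCV 2.4)
    #include <iostream>
    #include <vector>

    int main()
    {
        // Stage 1: the Viola & Jones detector.
        cv::CascadeClassifier faceCascade("haarcascade_frontalface_alt.xml");

        // Stage 2: LBPH recognizer, trained on cropped grayscale faces of equal size.
        const char* trainFiles[] = { "alice_1.png", "alice_2.png", "bob_1.png", "bob_2.png" };
        const int   trainIds[]   = { 0, 0, 1, 1 };       // one integer id per person
        std::vector<cv::Mat> trainFaces;
        std::vector<int>     labels;
        for (int i = 0; i < 4; ++i)
        {
            cv::Mat img = cv::imread(trainFiles[i], CV_LOAD_IMAGE_GRAYSCALE);
            if (img.empty()) continue;                   // skip missing placeholder files
            cv::resize(img, img, cv::Size(100, 100));
            trainFaces.push_back(img);
            labels.push_back(trainIds[i]);
        }
        cv::Ptr<cv::FaceRecognizer> model = cv::createLBPHFaceRecognizer();
        model->train(trainFaces, labels);

        // Runtime: detect, crop, resize to the training size, then predict.
        cv::Mat gray = cv::imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
        if (gray.empty()) return 1;
        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));
        for (size_t i = 0; i < faces.size(); ++i)
        {
            cv::Mat face = gray(faces[i]).clone();
            cv::resize(face, face, cv::Size(100, 100));
            int label = -1; double distance = 0.0;
            model->predict(face, label, distance);
            std::cout << "face " << i << " -> person " << label
                      << " (distance " << distance << ")" << std::endl;
        }
        return 0;
    }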

Eye detection is used to align the face so that the chance of a correct recognition on your region of interest is higher. However, most recognition algorithms, when trained with enough examples per person, do not strictly need alignment, since the model for each person is trained on samples containing several small rotations of the face.

So you could drop this step and still detect and recognize faces if, in your application, the faces appear mostly upright.
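
If you do keep the alignment, the step itself is small once the two eye centres are known: rotate and scale the crop so the eyes land on fixed positions. A sketch of one common way to do it; the output size and eye placement below are arbitrary example values:

    // Warp a grayscale face crop so the detected eye centres end up at fixed
    // positions of a W x H output image.
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <cmath>

    cv::Mat alignByEyes(const cv::Mat& faceGray, cv::Point2f leftEye, cv::Point2f rightEye,
                        int W = 100, int H = 100)
    {
        cv::Point2f eyesCenter((leftEye.x + rightEye.x) * 0.5f,
                               (leftEye.y + rightEye.y) * 0.5f);
        double dx = rightEye.x - leftEye.x;
        double dy = rightEye.y - leftEye.y;
        double angle = std::atan2(dy, dx) * 180.0 / CV_PI;        // current tilt of the eye line
        double scale = (0.5 * W) / std::sqrt(dx * dx + dy * dy);  // eyes 50% of the width apart

        // Rotate and scale around the point between the eyes ...
        cv::Mat M = cv::getRotationMatrix2D(eyesCenter, angle, scale);
        // ... then translate that point to a fixed spot in the output image.
        M.at<double>(0, 2) += W * 0.5 - eyesCenter.x;
        M.at<double>(1, 2) += H * 0.4 - eyesCenter.y;             // eyes at 40% from the top

        cv::Mat aligned;
        cv::warpAffine(faceGray, aligned, M, cv::Size(W, H));
        return aligned;
    }

    int main()
    {
        // Tiny usage example with a synthetic image and made-up eye positions.
        cv::Mat face(200, 200, CV_8UC1, cv::Scalar(128));
        cv::Mat aligned = alignByEyes(face, cv::Point2f(70, 95), cv::Point2f(135, 85));
        return aligned.empty() ? 1 : 0;
    }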

Someone wearing glasses doesn't really affect detection of the face, since enough other facial features can still be detected. For detecting the eye regions, however, glasses can introduce reflections and distortions that make it harder to find the specific eye features. It's also possible that the trained classifier simply did not see enough examples of people wearing glasses. If you want better results there, the only way is to train your own detector and inner (eye) classifier.

So, concretely: the face detection models do detect faces with glasses; the eye models sometimes do, but they were not exhaustively trained for that purpose, so they are not the best choice to rely on here.

As for the different classifier models, it is really a game of trial and error. Just try the different models on your application's data and find out which works best for you.
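
If it helps, a rough trial-and-error harness can be as simple as this: load a handful of the stock cascades, run each on the same test image, and compare hit counts and timing. The cascade list is just an example from OpenCV's data folder and the image name is a placeholder:

    // Compare a few stock face cascades on one test image (2.4-era C++ API).
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/objdetect/objdetect.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        const char* cascades[] = {
            "haarcascade_frontalface_default.xml",
            "haarcascade_frontalface_alt.xml",
            "haarcascade_frontalface_alt2.xml",
            "lbpcascade_frontalface.xml"
        };

        cv::Mat gray = cv::imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
        if (gray.empty()) { std::cerr << "could not read test.jpg" << std::endl; return 1; }
        cv::equalizeHist(gray, gray);

        for (int i = 0; i < 4; ++i)
        {
            cv::CascadeClassifier c;
            if (!c.load(cascades[i])) { std::cout << cascades[i] << ": not found" << std::endl; continue; }

            std::vector<cv::Rect> hits;
            double t = (double)cv::getTickCount();
            c.detectMultiScale(gray, hits, 1.1, 3, 0, cv::Size(60, 60));
            t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();

            std::cout << cascades[i] << ": " << hits.size()
                      << " detections in " << t * 1000.0 << " ms" << std::endl;
        }
        return 0;
    }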

The algorithms used in those applications, if they actually do recognition, still require you to supply some sample images of a person; otherwise the algorithm has no data from which to build a unique identifier. However, once supplied with that data, if the identifier is discriminative enough, it will indeed not ask for more data during use. Remember, though, that these techniques are strongly influenced by lighting and other distortions.
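
Related to that, with LBPH you can make the "discriminative enough" part explicit by setting a distance threshold, so that probes too far from every trained person come back as unknown (-1). A small sketch; the threshold value and file names are invented for illustration, and the model file is assumed to have been saved earlier with FaceRecognizer::save:

    // Reject unknown faces with LBPH's distance threshold (OpenCV 2.4-era API).
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/contrib/contrib.hpp>
    #include <iostream>

    int main()
    {
        cv::Ptr<cv::FaceRecognizer> model = cv::createLBPHFaceRecognizer();
        model->load("lbph_model.yml");       // model trained and saved earlier
        // Any probe whose best-match distance exceeds this value is reported as -1.
        model->set("threshold", 70.0);       // made-up value; tune it on your own data

        cv::Mat probe = cv::imread("probe_face.png", CV_LOAD_IMAGE_GRAYSCALE);
        if (probe.empty()) return 1;

        int label = model->predict(probe);
        if (label == -1)
            std::cout << "unknown person (distance above threshold)" << std::endl;
        else
            std::cout << "recognized person " << label << std::endl;
        return 0;
    }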

Just my 2 cents!


Comments

Thank you Steven.

Your 2 cents were really helpful!

I removed the eye detection part from my application code and, to compensate for the missing face alignment, I am now using more faces per person (50 per person) as training data for the LBPH face recognizer. I am getting better results compared to my previous attempts.

I appreciate your help.

Soaptechie (2013-02-28 20:24:09 -0600)

No problem, just accept answers if a good one pops up :) On the other hand, helping others around here also helps me think about my own projects.

StevenPuttemans (2013-03-01 01:59:24 -0600)

answered 2013-03-04 11:27:27 -0600


Eye detection with Haar cascades is not nearly precise enough for proper image alignment. If you run some tests, you'll soon notice it generates far too many false positives and false negatives to be useful in unconstrained scenarios. Contrary to Steven Puttemans' remarks, all face recognition algorithms I know of improve greatly with proper image alignment. This is simply because most of them rely on features that are not rotation-invariant, hence they can only compensate for slight misalignment. I've talked to Tal Hassner, who does research on unconstrained face recognition, and here is what he said:

We have found that alignment of faces has a dramatic effect on face recognition performance (see our ACCV and BMVC papers) So much so, that we released our own aligned version of the LFW data set, which alone pushes accuracy up about 8% using existing methods (you can find the set here: http://www.openu.ac.il/home/hassner/data/lfwa/).

That said, head pose estimation and image alignment are tough problems and are not entirely solved. I guess you can find commercial solutions, but I can neither recommend one nor comment on their performance. Recent advances in this research area have led to very accurate facial feature detectors (although slow ones):

A probably less accurate but faster approach is given in:

If anyone can add feature detectors or relevant publications to the list, please feel free to post them!

