Ask Your Question

I want to implement a FaceRecognizer that just judges whether two faces belong to the same person or not.

asked 2013-04-15 12:46:20 -0600 by Sayakiss

But in OpenCV, no algorithm seems to exist directly for that purpose.

The currently available algorithms are:

Eigenfaces (see createEigenFaceRecognizer())

Fisherfaces (see createFisherFaceRecognizer())

Local Binary Patterns Histograms (see createLBPHFaceRecognizer())

These algorithms assume we are given a set of images belonging to several different people, and they judge which of those people a new image is most similar to.

(Because the training data set is small (only one image), I chose the Local Binary Patterns Histograms algorithm.)

I could train the algorithm with one image of that person plus some meaningless images (images without a face); the result should then say the test image is most similar to the only person in the training set, and I could just use the confidence of that result.

It just feels so dirty to adapt the algorithm to my purpose this way. I just wonder, is there any elegant way to implement it?
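The verification idea described above (one reference image, compare a test image against it, threshold the confidence) can be sketched without the `cv::face` module at all. Below is a minimal, illustrative LBP-histogram comparison in plain NumPy — this is not OpenCV's actual LBPH implementation, and the threshold value is a made-up placeholder that would have to be tuned on real data:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP: compare each interior pixel with its
    neighbours, pack the comparisons into an 8-bit code, and return a
    normalized 256-bin histogram of those codes."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, one bit each, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

def chi_square(h1, h2, eps=1e-10):
    """Chi-square histogram distance (the kind of dissimilarity
    measure LBPH-style methods rely on)."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def same_person(reference, test, threshold=0.5):
    """Hypothetical verification: small distance -> same person.
    The 0.5 threshold is an arbitrary placeholder."""
    d = chi_square(lbp_histogram(reference), lbp_histogram(test))
    return d < threshold, d
```

With this framing, "verification" is just a distance plus a threshold, which is why adapting the existing recognizers to the task is less dirty than it looks.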


2 answers


answered 2013-04-15 13:29:09 -0600

An elegant way is to take the complete example and remove all the overhead of the algorithm you don't need. It is not wrong or dirty to adapt the original code in order to get working functionality fast; this is just something that is done a lot.

Also, if you have created a single-person identifier with a confidence score, like, let's say, for a login screen or so, then create a pull request to add it as an example. People are always interested in stuff like this :)

Btw, even if you want to recognize a single person, it is always better to use a set of images of that person. For example, when creating a login setup on your laptop, the lighting always changes. Try to incorporate as much variation as possible in your set :)




I think he just wanted to shortcut the training step ..

berak ( 2013-04-15 13:38:03 -0600 )

Yeah, I just want to use OpenCV to implement a login screen. But I have some questions: Should I convert the images to grayscale first? Should I align the facial images? If I choose LBPH, what's the proper confidence threshold to decide whether it's the same person or not?

Sayakiss ( 2013-04-15 22:36:17 -0600 )

Basically the Fisherfaces and Eigenfaces algorithms require a grayscale input, so yes, converting the image and applying equalizeHist to it will be needed. About aligning: if you want to do a login procedure, you will first need face detection, for example using the Viola & Jones approach. The result isn't aligned yet, so either you try to implement an automatic alignment or you make sure that your reference dataset of the person is varied enough :)

StevenPuttemans ( 2013-04-16 01:35:26 -0600 )

answered 2013-04-15 13:29:05 -0600 by berak

updated 2013-04-15 15:56:05 -0600

"I may train the algorithm by a image of one person and some meaningless images"

No, don't do that.

You won't get anywhere this way. LBPH (and the others, too) does a nearest-neighbour search for the closest match in the training set, so adding meaningless images won't make it better; more likely it will make it worse.

Maybe you're thinking of neural networks or boosted learning techniques, which in fact require positive as well as negative images (and weighting their ratio is one of the keys to success there), but that's not the case here.

Do as you were told before, and feed a couple of images per person into the db/training.
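The nearest-neighbour point above is easy to see in a tiny sketch. Here is a plain-NumPy toy (hypothetical feature vectors stand in for the real LBP histograms; `nearest_neighbour_predict` is an illustrative name, not an OpenCV function): a "meaningless" negative sample contributes nothing unless it accidentally becomes the nearest match, whereas several samples per person cover more variation.

```python
import numpy as np

def nearest_neighbour_predict(gallery, query):
    """gallery: list of (label, feature_vector) pairs.
    Returns the label of the closest vector and its distance --
    the distance plays the role of FaceRecognizer's 'confidence'
    (lower means a closer match)."""
    return min(
        ((label, float(np.linalg.norm(query - vec))) for label, vec in gallery),
        key=lambda pair: pair[1],
    )

# Two samples of the same person, one of another person:
gallery = [
    ("alice", np.array([1.0, 0.0])),
    ("alice", np.array([0.9, 0.1])),
    ("bob",   np.array([0.0, 1.0])),
]
```

Because prediction is just "whose sample is closest", extra images of the right person directly improve coverage, while junk images can only hurt.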



Seen: 934 times

Last updated: Apr 15 '13