
Face recognition with opencv 2.4.x accuracy

asked 2013-01-02 05:30:12 -0600

Dilek Schumacher

Hello, I am trying to implement this tutorial; it's a really great and clear tutorial, but I have a serious problem.

I built a database of training images containing my pictures and my friend's pictures. Let's say my name is xxx and my friend's name is yyy. When the program finds my face, it generally says yyy. (I tried it with lots of people whose pictures I did not add to the database, and it keeps saying yyy.) When I first tried the program, I added my pictures plus pictures of Brad Pitt and Adriana Lima. When the webcam captured my face, it said my name; when it captured Brad Pitt's face (a picture shown from my phone), it said Brad. It looked like it worked, but it really doesn't.

What I actually need is a program that closes an application when it captures my face and opens one when my face is not there. How can I do that, and how can I make recognition more accurate? Is it because of my pictures? I use the Python code for "Aligning Face Images"; my pictures use offset_pct=(0.3,0.3) and dest_sz=(250,250). Are those not good values? I have 10 pictures for each person. Please tell me the possible reasons it fails to recognize me.

Also, when I try to set a threshold (even with the value 0), it can't find anybody. I need to use a threshold because the program should only react to my face; other faces shouldn't be allowed.

I really need some answers; I have been working on this for a long time.

My system: Windows 8, VS2010 Premium, OpenCV 2.4.2.


4 answers


answered 2013-01-02 14:26:03 -0600

updated 2013-01-02 14:28:13 -0600

The first thing I would do is estimate the actual recognition rate, as shown in this answer:

If the recognition rate turns out to be too low, it's time to preprocess the images. If your images are subject to differences in illumination you could try the approach given in:

  • Tan, X., and Triggs, B. "Enhanced local texture feature sets for face recognition under difficult lighting conditions." IEEE Transactions on Image Processing 19 (2010), 1635–1650. (PDF), (C++ Code), (Python Code)

But what is going to give you the greatest increase in recognition rates is the correct alignment of your image data. The Python script I have given in the mentioned tutorial might be sufficient to manually crop the images, but for an automated system you'll need something more clever. An interesting approach (and code!) is given in:

You'll probably need to make some minor changes to make the code work in a recent OpenCV 2.4, but I think it's a feasible task. Lately another highly interesting approach to head pose estimation was given in:

But I didn't have time to experiment with it yet, so I don't know if you can use it in a real-time environment. There's also a cool blog post by Roy, which is worth reading:

I guess that makes a good start for some research!



You are the best; really good and clear answer. I will try them as soon as possible. I think I cannot run the first code you mentioned (to estimate the actual recognition rate) because my code is C++, but I will search more. Thank you for your answer.

Dilek Schumacher (2013-01-03 08:35:22 -0600)

Thanks for understanding that I can't come up with solutions instantly. This is quite a complex problem that will need some research. Regarding the cross validation, I can also write a C++ version.

Philipp Wagner (2013-01-03 14:16:26 -0600)

answered 2013-01-08 06:18:07 -0600

updated 2013-01-08 06:27:08 -0600

You can also use this paper for the correct alignment of your image data: Bolme, D.S. "Average of Synthetic Exact Filters", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009. PDF - C source code

You can use the position of the eyes to align the faces.


answered 2013-01-10 06:19:19 -0600

You can also use a facial landmark detector to align your data. I think this is a good one:

flandmark: Open-source implementation of facial landmark detector Project page - PDF - Source Code



How do you align the face once you have the keypoints?

caron (2013-06-25 10:32:16 -0600)

Something like this:

  1. Get the angle of deviation between the eyes:

     Point2d right_eye = landmarks[RIGHT_EYE_ALIGN];
     Point2d left_eye  = landmarks[LEFT_EYE_ALIGN];
     Point2d eyeDirection(right_eye.x - left_eye.x, right_eye.y - left_eye.y);
     double angle = atan2(eyeDirection.y, eyeDirection.x) * 180.0 / CV_PI;

  2. Get the rotation matrix:

     Mat rot_mat = getRotationMatrix2D(left_eye, angle, 1.0);

  3. Warp affine:

     Mat tmp; // rotated image result
     warpAffine(src_image, tmp, rot_mat, src_image.size());

albertofernandez (2013-07-01 01:56:47 -0600)

answered 2014-11-20 17:57:52 -0600

Sowmya


I am working on the same problem. The link that you are using always gives an incorrect label, so I tried using the code from this website. This gives 100% accuracy for offline training and testing because it does all the preprocessing.

I have one question to ask: how did you get the names displayed for yourself and Pitt (for example) in the video? I want to know how the tagging is done; I am unable to tag people in videos. Can you please share your code?


Question Tools

Asked: 2013-01-02 05:30:12 -0600
Seen: 10,031 times
Last updated: Nov 20 '14