
Poor accuracy with LBP face recognition

asked 2013-03-10 12:12:41 -0600

Preeti

I am working on code that runs face detection first; the detected face is stored, and then face recognition, gender detection, and age detection are performed on that image. It is basically the same code (using LBP), just trained three times with three different training databases. I initially used the AT&T face database, and eventually I appended a few of my own pictures to it as well. The real problem is accuracy: in theory LBP promises recognition rates of 96%+, but I am not even close to that. Where could I be going wrong? Please help, I need to submit the project this week.
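For reference, here is a minimal sketch of the pipeline described above, assuming the OpenCV 2.4 C++ API; the cascade path, the image/label lists, and the test image are placeholders to replace with your own data:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/contrib/contrib.hpp>  // FaceRecognizer lives here in OpenCV 2.4
#include <vector>

using namespace cv;

int main()
{
    // Face detector (the cascade file name is a placeholder).
    CascadeClassifier faceCascade;
    faceCascade.load("haarcascade_frontalface_alt.xml");

    // Training data: grayscale face crops, all the same size, plus integer labels.
    std::vector<Mat> images;
    std::vector<int> labels;
    // ... fill images/labels from your database (AT&T plus your own pictures) ...

    // Train an LBPH recognizer (defaults: radius 1, 8 neighbors, 8x8 grid).
    Ptr<FaceRecognizer> model = createLBPHFaceRecognizer();
    model->train(images, labels);

    // Test time: detect, crop, resize to the training size, then predict.
    Mat gray = imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);  // placeholder test image
    std::vector<Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, Size(80, 80));
    if (!faces.empty())
    {
        Mat face = gray(faces[0]).clone();
        resize(face, face, images[0].size());   // must match the training image size
        int label = -1;
        double distance = 0.0;
        model->predict(face, label, distance);  // smaller distance = closer match
    }
    return 0;
}

The same crop size and preprocessing have to be applied to the training images and to the detected test faces; if they differ, the reported distances (and therefore the recognition rate) become meaningless.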


Comments

Preprocessing, like equalizeHist and cropping, seems to be crucial.

If you get too many false positives, there is a threshold parameter that you can set via:

reco.set("threshold", 100.0); // any distance( minDist ) above that will get discarded as 'false'

(Unfortunately, that won't help you with false negatives.)

Also, unlike the Fisherfaces and Eigenfaces methods (which need a LOT of other faces to build an optimal PCA), you won't gain much by throwing more databases at LBP.

berak ( 2013-03-10 16:12:38 -0600 )
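To make the suggestion above concrete, here is a small sketch of that preprocessing and threshold setup, assuming the OpenCV 2.4 C++ API; the 100x100 crop size and the 100.0 threshold are just starting values to tune:

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/contrib/contrib.hpp>  // FaceRecognizer in OpenCV 2.4

using namespace cv;

// Apply the same preprocessing to every sample, at training and at prediction time.
Mat preprocessFace(const Mat& gray, const Rect& faceRect)
{
    Mat face = gray(faceRect).clone();   // crop to the detected face rectangle
    resize(face, face, Size(100, 100));  // one fixed size for all samples
    equalizeHist(face, face);            // normalize contrast before LBP
    return face;
}

// Any distance above the threshold makes predict() return -1
// instead of a (possibly wrong) label.
Ptr<FaceRecognizer> makeRecognizer()
{
    Ptr<FaceRecognizer> reco = createLBPHFaceRecognizer();
    reco->set("threshold", 100.0);
    return reco;
}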

you say: "eventually i appended a few of my pictures in the database too".

How many per person? Maybe you just did not take enough; 10-20 seems to be good.

berak ( 2013-03-11 14:13:33 -0600 )

I did append 10 images per new person I added to the database.

Preeti ( 2013-03-14 05:00:40 -0600 )

1 answer


answered 2013-03-11 11:04:58 -0600

One of the main problems with training a good classifier is that people use too few negative images. These images are used to model background clutter and variation, so your negative image set should be extremely large if you want a very robust classifier.

I managed to get a 96% detection rate using 2,500 positives and 100,000 negatives. I guess this makes my point.
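For the record, a cascade-training run along those lines would be launched with OpenCV's opencv_traincascade tool, roughly like this; the file names, window size, and stage count are placeholders, and -numPos is usually set a bit below the number of samples in the .vec file:

opencv_traincascade -data lbp_face -vec positives.vec -bg negatives.txt \
    -numPos 2300 -numNeg 100000 -numStages 20 \
    -featureType LBP -w 24 -h 24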


Comments

I think he is talking about face recognition and associated tasks, while you are referring to learning LBP cascades. In that sense you are totally right: those models need data. Regarding face recognition, you can expect good recognition rates on simple datasets (like the AT&T one mentioned), but these are fairly simple models that need preprocessing for faces in the wild. As for things like age estimation, that is a tough problem that needs a lot of thought put into it.

Philipp Wagner ( 2013-03-11 11:46:12 -0600 )

Oh, agreed, I misunderstood the topic :)

StevenPuttemans ( 2013-03-12 03:21:23 -0600 )


Stats

Asked: 2013-03-10 12:12:41 -0600

Seen: 427 times

Last updated: Mar 11 '13