
LBP train_cascade process of training

asked 2016-09-05 17:10:54 -0500

zvone

I am trying to understand what train_cascade (using LBP) is actually doing with the samples. If I have, for example, 1000 positives and 1000 negatives, it calculates an LBP histogram for each, right? And what does it do with all of these histograms? From which ones does it choose features to make weak learners?


1 answer


answered 2016-09-09 06:44:54 -0500

Oh wait, you are mixing two things up.

  • On the one hand you have LBP histograms, used for recognition for example.
  • On the other hand you have LBP features, which are fed to AdaBoost, a boosting process that creates a cascade of weak classifiers (binary stump trees by default).
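To make the first bullet concrete, here is a minimal sketch of the basic 3×3 LBP operator and the histogram built from it (the representation used for recognition, not what traincascade feeds to boosting). This is an illustrative toy implementation, not OpenCV's code:

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP code for the center pixel of a patch.
    Each of the 8 neighbors contributes one bit: 1 if the neighbor
    is >= the center value, 0 otherwise."""
    center = patch[1, 1]
    # Neighbors taken clockwise starting at the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << i
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels of a grayscale
    image: one 256-bin descriptor for the whole image."""
    h, w = img.shape
    hist = np.zeros(256, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist
```

For recognition you would compare such histograms between images; traincascade instead evaluates individual LBP responses at fixed grid positions, as the answer explains.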

If you want a step-by-step explanation of the process, I suggest you grab Chapter 5 of OpenCV 3 Blueprints! It goes through the process step by step.
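For concreteness, LBP cascade training is driven by the opencv_traincascade tool; a minimal invocation could look like the following. The file names and sample counts here are placeholders, not values from this thread:

```shell
# Assumes positives.vec was produced by opencv_createsamples and
# negatives.txt lists background image paths (both hypothetical names).
opencv_traincascade -data cascade_dir \
    -vec positives.vec -bg negatives.txt \
    -numPos 900 -numNeg 1000 -numStages 20 \
    -featureType LBP -w 24 -h 24
```

Note that -numPos is usually set somewhat below the total number of positives in the vec file, since later stages consume extra samples to replace those rejected earlier.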



I got the book, thanks for the recommendation. I still don't understand: from which of these 1000 positives are the LBP features extracted?

zvone ( 2016-09-10 05:57:53 -0500 )

During training, all possible LBP features are calculated from each training sample. Since the samples all have the same dimensions, or are rescaled to them, they have the exact same feature locations; this is enforced by a feature grid. The boosting then decides which of these LBP grid features are discriminative enough to separate the negatives from the positives and makes a model out of them. At detection time, only the selected features are calculated on each given sample. Does that clarify it further?
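The selection step described above can be sketched with a toy example. Assume every sample (positive and negative) yields the same fixed set of grid feature values; boosting then favors the feature whose threshold stump best separates the two classes. The data here is synthetic, with one feature made discriminative on purpose; this is only an illustration of the idea, not OpenCV's actual boosting code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 20
# 100 positives and 100 negatives, each described by the same
# 20 grid feature values (synthetic stand-ins for LBP responses).
pos = rng.normal(0.0, 1.0, size=(100, n_features))
neg = rng.normal(0.0, 1.0, size=(100, n_features))
pos[:, 7] += 3.0  # deliberately make feature 7 separate the classes

X = np.vstack([pos, neg])
y = np.hstack([np.ones(100), np.zeros(100)])

def best_stump_error(values, labels):
    """Lowest misclassification rate any single-threshold stump
    (either polarity) achieves on one feature."""
    best = 1.0
    for t in np.unique(values):
        err = np.mean((values >= t).astype(float) != labels)
        best = min(best, err, 1.0 - err)  # flipping polarity gives 1 - err
    return best

# Boosting-style selection: pick the weak learner with the lowest error.
errors = [best_stump_error(X[:, j], y) for j in range(n_features)]
selected = int(np.argmin(errors))
print(selected)  # picks feature 7, the one engineered to be discriminative
```

Real AdaBoost repeats this with reweighted samples to pick many complementary features, but the per-round selection criterion is this stump error.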

StevenPuttemans ( 2016-09-12 03:53:47 -0500 )
