
opencv_traincascade Negative samples training method

asked 2013-10-25 01:36:29 -0600

r.a.

updated 2013-10-25 04:05:09 -0600

Hi, I'm successfully using the opencv_traincascade module with LBP features.

Still, I'm not sure whether my training setup is "logical".

I have two questions:

  1. Is numNeg, the number of negative samples, simply the number of training images, or is it the number of sample windows taken from those images? (So there could be, for example, 1000 negative images but 2000 negative samples taken from those images.)
  2. FA rate: is it per window or per image? In each stage, do we scan the negative images and make sure that none of them is classified as positive (according to the desired FA rate)? Put another way: if the FA rate is 0.5 and we have 14 stages, we get a 0.5^14 error rate. Does this mean that after training ends, the error on those negative images will be 0.5^14 per frame?

Actually, I don't understand what happens inside a stage of the training process when I ask for a 0.995 detection rate and a 0.5 FA rate.

The positive side is simple and clear. But what about the negative samples? After choosing features, do we then scan the negatives (block by block, rescaling, and so on) with the current classifier and check that the stage error target is met?

I know it's a long question, but I hope for a simple answer.


3 answers


answered 2013-10-25 04:11:17 -0600

numNeg is the number of negative windows that are grabbed from your set of negative images. It is not the images themselves, but negative windows, equal in size to your model window, that are randomly grabbed from your negatives dataset. So yes, you could actually use just 3 images of 1000x1000 pixels and derive over 1000 negative windows of 15x15 pixels, for example.
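The random grabbing described above can be sketched roughly like this (an illustrative sketch only, not the actual traincascade sampling code; the image and window sizes are the made-up ones from the example):

```python
import random

def sample_negative_windows(images, win_w, win_h, num_neg, seed=0):
    """Randomly grab num_neg windows of the model size from a list of
    (image_width, image_height) negative images."""
    rng = random.Random(seed)
    windows = []
    while len(windows) < num_neg:
        img_w, img_h = rng.choice(images)
        if img_w < win_w or img_h < win_h:
            continue  # image too small to hold even one model window
        x = rng.randrange(img_w - win_w + 1)
        y = rng.randrange(img_h - win_h + 1)
        windows.append((x, y))
    return windows

# 3 images of 1000x1000 px easily yield 1000+ windows of 15x15 px
wins = sample_negative_windows([(1000, 1000)] * 3, 15, 15, 1000)
print(len(wins))  # 1000
```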

Your second remark is correct. We want the overall error on your detections to drop below the false-alarm rate raised to the power of the number of stages. If this happens, training can stop early (an extra stopping criterion), because your model reaches the required quality with fewer stages.
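Concretely, with the per-stage values from the question, the overall targets work out like this (plain arithmetic, not traincascade output):

```python
stage_fa = 0.5    # maxFalseAlarmRate per stage
stage_hr = 0.995  # minHitRate per stage
num_stages = 14

overall_fa = stage_fa ** num_stages  # overall false-alarm target
overall_hr = stage_hr ** num_stages  # overall hit-rate target
print(f"overall FA rate <= {overall_fa:.2e}")  # ~6.10e-05 (1/16384)
print(f"overall hit rate >= {overall_hr:.4f}")  # ~0.9322
```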

A 0.995 detection rate means that you want 99.5% of all objects that are actually in your positive dataset to be detected by the trained classifier at each stage (a stage consisting of a combination of weak classifiers).

The negatives are used to check whether any negatives get wrongly classified, and thus to measure the error :)

Hope this helps out!


Comments

Hi, thanks for the detailed answer. Just to make sure: 1. I understand that numNeg is the number of negative windows, not images. 2. So the FA rate applies to those windows, not to the negative image set (that's why, after finishing the classifier, I get many false alarms on those training negative images, as happened to me). 3. Is there a way to make sure a certain image will be added as a negative window, for example if the size of the negative image equals the size of the scanning window?

r.a. (2013-10-26 12:12:02 -0600)
  1. Yes. 2. Yes. 3. Yes: cut out your negatives beforehand, making sure the window size is correct for training, and add what are called hard negatives!
StevenPuttemans (2013-10-28 07:20:34 -0600)

O.k., thanks for the help, I appreciate it.

r.a. (2013-10-28 07:58:54 -0600)

answered 2013-10-31 13:09:45 -0600

r.a.

updated 2013-10-31 13:35:14 -0600

I'll try to summarize this topic to help others.

I have reviewed the code, and thanks to StevenPuttemans' help I think I finally got it.

1. The cascade (AdaBoost) classifier consists of stages.

In each stage we have numPos and numNeg samples.

2. numPos is the number of positive samples used for training the i-th stage. It is not the total number of samples in the vec file.

You may choose it to be, for example, 0.95 * (number of samples in the vec file).

3. numNeg is the number of negative samples used in training the i-th stage. They are picked randomly (cropped and scaled from the negative images).

It can be more or less than the total number of negative images that you have.

For example: suppose you have 1000 negative images; numNeg may still be 5000 samples.

Only those samples that were mistakenly classified as positive by the previous stages are picked.

This is a good idea, since it guarantees that only the more difficult negatives move on to the next stage.
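The bootstrapping idea in point 3 can be sketched as follows (a toy illustration: each "stage" here is just a stand-in predicate over a score, not the real cascade):

```python
def mine_hard_negatives(candidate_windows, stages):
    """Keep only the windows that every stage trained so far still wrongly
    accepts as positive; these become the negatives for the next stage.
    Each stage is a function window -> bool (True = 'looks positive')."""
    survivors = candidate_windows
    for stage in stages:
        survivors = [w for w in survivors if stage(w)]
    return survivors

# Toy stand-in stages: a window 'passes' if its score exceeds a threshold.
stages = [lambda w: w > 0.3, lambda w: w > 0.6]
candidates = [0.1, 0.4, 0.7, 0.9]
print(mine_hard_negatives(candidates, stages))  # [0.7, 0.9]
```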

4. Hard negatives are the false alarms you get after running your final classifier on a set of negative images or a video.

You may add these so-called hard negatives by cropping and resizing them to the positive sample width and height.
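The crop-and-resize step for hard negatives could look roughly like this (a minimal NumPy sketch with nearest-neighbour sampling; a real pipeline would typically use cv2.resize, and the sizes below are made up):

```python
import numpy as np

def crop_and_resize(img, x, y, w, h, out_w, out_h):
    """Crop a w x h window at (x, y) from a grayscale image (2D array) and
    resize it to the positive-sample size with nearest-neighbour sampling."""
    patch = img[y:y + h, x:x + w]
    rows = np.arange(out_h) * h // out_h  # source row per output row
    cols = np.arange(out_w) * w // out_w  # source column per output column
    return patch[rows][:, cols]

# Cut a false-alarm region out of a dummy image and bring it to 24x24,
# the (assumed) positive-sample size.
img = np.arange(100 * 100, dtype=np.uint8).reshape(100, 100)
hard_neg = crop_and_resize(img, 10, 20, 50, 40, 24, 24)
print(hard_neg.shape)  # (24, 24)
```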

5. The acceptance ratio of the negatives is the number of negative windows classified as positive divided by the total number of negative windows examined in each stage. For example, 1/1000 means that when randomly picking 1000 negative windows from the negative images, one of them is classified as positive.

As explained in point 3, only the windows classified as positive by the stage i-1 classifier become the negatives of stage i.
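A back-of-the-envelope consequence of point 5 (plain arithmetic, with hypothetical acceptance ratios): as the cascade improves, far more raw windows must be examined to collect the same numNeg survivors, which is why later stages take much longer to gather their negatives.

```python
num_neg = 5000  # negatives wanted per stage

# Hypothetical acceptance ratios after stages 1, 5 and 10 (0.5^stage)
for stage, ratio in [(1, 0.5), (5, 0.03125), (10, 0.0009765625)]:
    raw = num_neg / ratio  # windows to examine to find num_neg survivors
    print(f"after stage {stage:2d}: ratio {ratio:.10f} -> ~{raw:,.0f} raw windows")
```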

6. The FA rate is only per window! I believe it would be better to also have an FA criterion at the image level, not just the window level (of course the two are correlated, but you still want to know your error rate at the image level). For now, just make sure you have a very low error at the window level, and hopefully you are then good at the image level as well.
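The correlation between window-level and image-level FA mentioned in point 6 can be made concrete: if the detector scans n windows per image, each with independent false-alarm probability p, the chance that an image contains at least one false alarm is 1 - (1 - p)^n. (Independence is an idealization, and the window count below is made up; real counts depend on image size and scale steps.)

```python
p = 0.5 ** 14   # per-window false-alarm probability after 14 stages
n = 100_000    # hypothetical number of windows scanned per image

image_level_fa = 1 - (1 - p) ** n
print(f"image-level FA: {image_level_fa:.3f}")  # ~0.998: almost every image
```

This is exactly why a classifier with a tiny window-level FA can still produce "many FA on the train negative images", as the original poster observed.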

I hope this clarifies things for others as well.


Comments

But when I have 1000 images and maxFalseAlarmRate = 0.5, after 10 stages there are just 1000 * 0.5^10 ≈ 0.97 negative images left? How does this work???

DerrickB (2016-12-18 03:51:53 -0600)

answered 2014-07-22 02:10:33 -0600

xonobo

Hello, this thread is really useful for understanding what happens while training a cascade classifier. I have one more question about the progress of the training.

For each stage, I guess N weak classifiers are trained, and the columns N | HR | FA just log the current performance of the stage classifier. In my case I see HR and FA equal to 1 up to N=4. I expected HR to increase and FA to drop at each iteration. Can anyone explain this, or is something wrong with my training?


Comments

xonobo, if you have a question, please ask one.

Please do not hijack previous threads, as they get totally cluttered.

berak (2014-07-22 03:01:01 -0600)

To respond to your question nevertheless (@berak is right): N = 1 means the first iteration of weak-classifier training in AdaBoost. It selects a single feature that ensures the hit ratio is as desired. Then the FA rate is calculated. If that is not low enough (by default, below 0.5), another feature is added to reduce the FA rate while trying to keep the hit ratio stable. This continues until the stage classifier reaches the desired hit rate and FA rate. HR can drop after an iteration; that's the downside of AdaBoost.

StevenPuttemans (2014-07-22 04:27:11 -0600)
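The per-stage loop described above can be sketched with a toy stand-in (the "features" here are simple threshold votes on synthetic 1-D scores, not real LBP features, and the score distributions are invented): after each added feature, the stage threshold is set so the hit rate stays above minHitRate, and then the FA rate is measured. Note how the first iteration can report HR = 1 because the threshold is still very loose, much like the log xonobo describes.

```python
import random

def stage_progress(pos, neg, thresholds, min_hit_rate=0.995, max_fa=0.5):
    """Toy sketch of one stage's boosting loop: add weak 'features'
    (threshold votes) until the stage FA rate drops to max_fa while
    the hit rate is kept >= min_hit_rate. Returns the (N, HR, FA) log."""
    votes_pos = [0.0] * len(pos)
    votes_neg = [0.0] * len(neg)
    log = []
    for n, t in enumerate(thresholds, start=1):
        votes_pos = [v + (x > t) for v, x in zip(votes_pos, pos)]
        votes_neg = [v + (x > t) for v, x in zip(votes_neg, neg)]
        # Stage threshold: keep at least min_hit_rate of the positives.
        stage_thr = sorted(votes_pos)[int((1 - min_hit_rate) * len(pos))]
        hr = sum(v >= stage_thr for v in votes_pos) / len(pos)
        fa = sum(v >= stage_thr for v in votes_neg) / len(neg)
        log.append((n, hr, fa))
        print(f"{n} | {hr:.4f} | {fa:.4f}")  # same shape as the N | HR | FA table
        if fa <= max_fa:
            break
    return log

rng = random.Random(0)
pos = [rng.gauss(2.0, 0.5) for _ in range(1000)]  # synthetic 'object' scores
neg = [rng.gauss(0.0, 0.5) for _ in range(1000)]  # synthetic 'background' scores
log = stage_progress(pos, neg, thresholds=[-0.5, 0.5, 1.0])
```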


Stats

Asked: 2013-10-25 01:36:29 -0600

Seen: 6,571 times

Last updated: Jul 22 '14