
# Number of stages or maxFalseAlarmRate?

Hi all,

I'm working on training a set of classifiers with opencv_traincascade to recognize a few handwritten characters.

I started out working with some typical defaults that I've found online: minHitRate 0.99 maxFalseAlarmRate 0.5 numStages 20

I observed that the stages take progressively longer to train, so I wanted to ask: is it better to cut the maxFalseAlarmRate and run fewer stages? In what situations is that better or worse?

My next run will be relaxing these parameters to see how quickly the classifier can train (and if the performance is acceptable) with: minHitRate 0.97 maxFalseAlarmRate 0.2 numStages 5
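For reference, the per-stage rates compound multiplicatively across stages, so each configuration implies a rough cascade-level target. A back-of-the-envelope sketch (the `cascade_targets` helper is my own, not part of OpenCV):

```python
# Per-stage rates compound multiplicatively across a cascade, so the
# overall targets are roughly:
#   hit rate         ~= minHitRate ** numStages
#   false alarm rate ~= maxFalseAlarmRate ** numStages

def cascade_targets(min_hit_rate, max_false_alarm_rate, num_stages):
    """Approximate cascade-level hit and false-alarm rates."""
    return (min_hit_rate ** num_stages,
            max_false_alarm_rate ** num_stages)

# Default-ish run: minHitRate 0.99, maxFalseAlarmRate 0.5, 20 stages
hit, fa = cascade_targets(0.99, 0.5, 20)
print(f"20 stages: hit ~{hit:.3f}, false alarm ~{fa:.2e}")

# Relaxed run: minHitRate 0.97, maxFalseAlarmRate 0.2, 5 stages
hit, fa = cascade_targets(0.97, 0.2, 5)
print(f" 5 stages: hit ~{hit:.3f}, false alarm ~{fa:.2e}")
```

So the relaxed run trades a much higher overall false-alarm target for faster training, with a similar overall hit-rate target.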

I'm going to do what I can to get a quick set of classifiers trained that's good enough for testing, and then see about training a new set with higher accuracy over a longer period of time.



You should fully grasp the concepts of cascade classifiers before making assumptions about those parameters. There is a specific reason why the default values are set the way they are.

Cascade classifiers combine many weak, fast classifiers into one (still fast) strong classifier. These classifiers are run in a sliding-window fashion over multiple scales of the image. As you can imagine, a single 1000x1000 image combined with a 24x24 model (in the case of a face) yields an enormous number of sliding windows that need to be classified.
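To get a feel for those numbers, here is a rough count of candidate windows over an image pyramid (the stride of 1 and scale factor of 1.1 are illustrative assumptions, not OpenCV's exact internals):

```python
# Rough count of sliding windows a detector must classify over a
# multi-scale image pyramid. Stride 1 and scale factor 1.1 are
# illustrative choices, not what OpenCV literally does internally.

def count_windows(img_w, img_h, win=24, stride=1, scale=1.1):
    total = 0
    w, h = img_w, img_h
    while w >= win and h >= win:
        # Dense windows at this pyramid level
        total += ((w - win) // stride + 1) * ((h - win) // stride + 1)
        # Shrink the image for the next level
        w, h = int(w / scale), int(h / scale)
    return total

print(count_windows(1000, 1000))  # several million candidate windows
```

Millions of windows per image is exactly why rejecting most of them with only a handful of feature evaluations matters so much.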

This leads to the main goal of cascade classifiers, which is twofold:

• Remove as many windows as possible from the evaluation process as early as possible, and thus reduce the processing time for a single image.
• If a window continues down the cascade, use as few feature evaluations as possible while still assuring maximum accuracy.

Keeping all of this in mind, we can now see why the default values were chosen.

• minHitRate ensures that our positive training data yields at least a decent detection rate at each stage. We do not want to lower this value too much. For example, a value of 0.8 would mean that 20% of our positive object training data can be misclassified, which would be a disaster. A misclassification rate of 1% (i.e. minHitRate 0.99) is a common value used in research.
• maxFalseAlarmRate defines how many features need to be added per stage. We want each weak stage to have a very good hit rate on the positives, while removing negative windows as fast as possible, but doing better than random guessing. 0.5 corresponds to a random guess; doing better than that means you successfully discard negative windows as negatives very early, using only a few feature evaluations, and let the remaining negatives be discarded by the following stages.

Since the cascade works like a waterfall, each negative has a chance of early rejection, ensuring that not all model features have to be evaluated, which would make execution much slower.

So basically, setting your maxFalseAlarmRate too low will yield larger weak stages and thus MORE features to be evaluated in more windows before an initial decision is made on a window. Since this grows exponentially, it makes sense not to lower it too much.

The number of stages determines your overall performance: in the end you want to know how well the complete cascade does on the negative set (to avoid false positive detections), without giving up accuracy on the true positive training samples.
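You can also run this reasoning backwards: assuming per-stage false-alarm rates multiply across stages, you can estimate how many stages a given maxFalseAlarmRate needs to reach a target overall rate (a rough sketch; `stages_needed` is a hypothetical helper, not an OpenCV function):

```python
import math

# Smallest n such that maxFalseAlarmRate ** n <= target overall rate,
# using the approximation that per-stage rates multiply across stages.
def stages_needed(max_false_alarm_rate, target_overall_rate):
    return math.ceil(math.log(target_overall_rate)
                     / math.log(max_false_alarm_rate))

print(stages_needed(0.5, 1e-5))  # at the default 0.5: 17 stages
print(stages_needed(0.2, 1e-5))  # fewer stages, but each is bigger
```

This is the trade-off in the question: a lower per-stage maxFalseAlarmRate reaches the same overall target in fewer, but individually more expensive, stages.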

Got it?


( 2015-12-02 03:20:42 -0500 )

You are welcome, do accept the answer if it suits your needs :)

( 2015-12-02 03:29:36 -0500 )

@StevenPuttemans Does it mean that if I decrease the maxFalseAlarmRate (e.g. to 0.4), the speed of my classifier gets worse, but it has fewer false positives?

Does it mean that I should not increase the maxFalseAlarmRate above 0.5, because then my weak classifiers are no better than random guessing?

( 2016-11-29 05:17:38 -0500 )

1) YES and NO. Speed will drop due to more complex classifiers being run on more samples in the earlier stages. But accuracy will not go up if you train with exactly the same data, since you do not add extra knowledge. To reduce false positives, you need to add more negative training data.

2) Increasing above 0.5 seems invalid indeed ... why would one desire that ...

( 2016-11-29 07:54:45 -0500 )
