As you know, we can imagine this kind of classifier as a function that assigns a pair of values to every window it gets as input: rejectLevels, an integer representing the stage where the window was eventually rejected, and levelWeights, the double value the boosting algorithm outputs (the one thresholded to decide whether the window passes to the next level of the cascade).
The overloaded detectMultiScale(…) only considers and gathers the windows that reach the last 4 stages (source code: if( classifier->data.stages.size() + result < 4 )).
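For reference, here is a minimal sketch of calling that overload with outputRejectLevels enabled (assuming the OpenCV 3/4 C++ API; the cascade and image file names are placeholders):

    #include <opencv2/objdetect.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Placeholder file names: use your own trained cascade and test image.
        cv::CascadeClassifier classifier("cascade.xml");
        cv::Mat img = cv::imread("test.png", cv::IMREAD_GRAYSCALE);

        std::vector<cv::Rect> objects;
        std::vector<int> rejectLevels;     // stage where each window ended up
        std::vector<double> levelWeights;  // boosted sum output at that stage

        // outputRejectLevels = true selects the overload discussed above:
        // windows reaching one of the last 4 stages are gathered and reported.
        classifier.detectMultiScale(img, objects, rejectLevels, levelWeights,
                                    1.1, 3, 0, cv::Size(), cv::Size(), true);

        for (size_t i = 0; i < objects.size(); ++i)
            std::cout << objects[i] << " level=" << rejectLevels[i]
                      << " weight=" << levelWeights[i] << std::endl;
        return 0;
    }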
What you experienced depends only on the small number of samples used to train the classifier.
In such a situation it can happen that just one weak classifier per stage is enough to separate negatives from positives.
If so, each stage assigns only 2 values, -1.0 and +1.0, and every threshold between them separates the two groups perfectly.
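To make that concrete, here is a schematic of a stage reduced to a single decision stump (illustrative only, not OpenCV's actual code; all names are made up):

    // Illustrative only: a boosted stage with just one decision stump.
    // Its sum can take exactly two values, the stump's two leaf outputs.
    double evaluateStage(double featureValue, double splitThreshold)
    {
        const double leftLeaf  = -1.0;  // vote for "negative"
        const double rightLeaf = +1.0;  // vote for "positive"
        return (featureValue < splitThreshold) ? leftLeaf : rightLeaf;
    }

    // Any stage threshold strictly between -1.0 and +1.0 then separates
    // the two groups perfectly.
    bool passesStage(double stageSum, double stageThreshold)
    {
        return stageSum >= stageThreshold;
    }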
Hence you get either a +1, when the sample is classified as positive (that is, it passes through all stages, the final one included; keep in mind that there are many errors), or a -1 (rejected at stage last-1, last-2 or last-3).
This also explains why model 3 needs fewer stages to train: some stages require more than one weak classifier and so do a better job compared to those of models 1 and 2.