Tips for training a cascade classifier
I am currently trying to train a cascade classifier on custom training images: around 70 positives and 600 negatives.
When I run the training (using a version of OpenCV built with TBB) with a model resolution of 20x60 px, an acceptance ratio threshold of 0.00003, and the feature type set to LBP, it takes around half an hour. When I raise the resolution to 100x300, it takes 24 hours or more. This is on a 6-core/12-thread Intel i7 ("Intel(R) Core(TM) i7-4930K CPU @ 3.40GHz, 3401 MHz, 6 Core(s), 12 Logical Processor(s)") with 16 GB of RAM, so it is confusing that it takes so long. The lower-resolution version produces a lot of false positives; I think it is simply too low-fidelity for my target data. The higher-resolution version is better, but it still has problems, and such insane training times are not viable for my purposes.
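For context on why the jump from 20x60 to 100x300 is so dramatic: my understanding is that OpenCV's LBP features are 3x3 grids of equal rectangular cells, and the trainer enumerates every cell size and position that fits inside the model window, so the feature pool grows roughly with the square of the window area. A quick count of that enumeration (pure Python, no OpenCV needed; the cell layout is my assumption about the implementation):

```python
def lbp_feature_count(width, height):
    """Count 3x3-cell LBP features that fit in a width x height window.

    Each feature is assumed to be a 3x3 grid of equal w x h cells,
    placed at every position where the full 3w x 3h block fits.
    """
    total = 0
    for w in range(1, width // 3 + 1):
        for h in range(1, height // 3 + 1):
            total += (width - 3 * w + 1) * (height - 3 * h + 1)
    return total

small = lbp_feature_count(20, 60)    # the 20x60 model window
large = lbp_feature_count(100, 300)  # the 100x300 model window
print(small, large, large / small)
```

Under this assumption the pool grows from about 37,000 features to about 24.7 million, a factor of roughly 660, which alone would account for a large part of the half-hour-to-24-hours blowup.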
Is there something that I should be doing differently to train my classifier so that it doesn't take so long? I feel like I must be doing something wrong if it is taking this long.
Separately, once I have a trained classifier, I am generally getting lots of false positives while still not always detecting my target. This makes me think my training data isn't good enough to identify the target properly.
My positive images are all cropped to similar aspect ratios, so that just the target and a small amount of background are showing. The negatives are full-size pictures of the environment, with no cropping or segmentation. Is this what I should be doing? I am unfamiliar with how this classifier works internally, but I imagine that if it does some sort of 1:1 comparison of positives and negatives, I might actually want to crop the negatives: even if the whole negative image does not look like my target object, a smaller section might. Is this guess correct? If not, what should I be doing to make my training results better? Is it simply a matter of getting more positives/negatives?
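For reference, my understanding is that the trainer does not compare whole images 1:1; it samples model-window-sized patches out of the negative images itself, which is why one full-size negative photo contributes many negative samples. A rough sketch of that kind of window enumeration (the stride value here is purely illustrative, not what OpenCV actually uses):

```python
def negative_windows(img_w, img_h, win_w, win_h, stride):
    """Enumerate top-left corners of win_w x win_h patches in an image.

    A cascade trainer effectively mines negatives from patches like
    these (and from scaled versions of the image), so full-size
    negatives already cover their smaller sections.
    """
    for y in range(0, img_h - win_h + 1, stride):
        for x in range(0, img_w - win_w + 1, stride):
            yield (x, y)

# one hypothetical 600x400 negative photo, scanned with a 20x60 window:
windows = list(negative_windows(600, 400, 20, 60, stride=10))
print(len(windows))  # thousands of candidate patches from a single image
```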
I was also thinking that the accuracy issues might be helped if I applied some sort of blur to both my training data and my input, but I am not sure if that is the right thing to do.
Finally, as I understand it, the classifier only operates on grayscale versions of the input images. Is there anything I can do so that colors which look similar in grayscale are not confused with each other?
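To illustrate the concern: the usual RGB-to-gray conversion (the ITU-R BT.601 luma weights, which I believe is what OpenCV's cvtColor uses) collapses very different colors onto the same gray value:

```python
def to_gray(r, g, b):
    """Standard BT.601 luma conversion, as used by common RGB->gray paths."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# a saturated red and a mid-gray land on the same grayscale value,
# so after conversion the classifier cannot tell them apart
print(to_gray(255, 0, 0))   # 76
print(to_gray(76, 76, 76))  # 76
```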
Edit: My current sample-creation/training command is included below. I have it wrapped in a PowerShell script in my project, but here I've inserted the values as they are evaluated. Note that I do not pass any buffer sizes. Also, I am currently running off of custom debug builds; I am aware that release builds will be faster and plan to switch the next time I ...