Ok, the crucial mistake you are making is trying to train a model of 90x140 pixels with that amount of training data. A 24x24 window already yields roughly 160,000 unique Haar features, and the count grows very rapidly with window size, so at your resolution the number is enormous. The algorithm has to keep all of these features in working memory, which is why it crashes.
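To see why the window size matters so much, you can count the features yourself. The sketch below is a plain-Python helper written for illustration (not part of OpenCV); it enumerates every shifted and scaled placement of the five basic Haar feature shapes inside a detection window:

```python
# Count all shifted and scaled placements of the five basic Haar feature
# shapes (two- and three-rectangle edges/lines plus the four-rectangle
# diagonal) inside a WxH detection window. Helper written for illustration.
def count_haar_features(W, H):
    shapes = [(2, 1), (1, 2), (3, 1), (1, 3), (2, 2)]
    total = 0
    for fw, fh in shapes:
        # every integer scaling of the base shape that still fits the window
        for w in range(fw, W + 1, fw):
            for h in range(fh, H + 1, fh):
                total += (W - w + 1) * (H - h + 1)  # all positions at that size
    return total

print(count_haar_features(24, 24))   # 162336 -- the classic 24x24 figure
print(count_haar_features(90, 140))  # 76431285 -- roughly 470x as many
```

At 90x140 the full feature pool is in the tens of millions, which is what blows up the training process.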

The suggestion I give students is: keep your aspect ratio correct, but use the opencv_createsamples utility to resize the samples to a smaller size, which still contains more than enough features.

For example, in your case pass -w 9 -h 14, or double that to -w 18 -h 28. The model will do exactly what it is supposed to do and the training will finish fine. Note also that the model size defines the smallest object you can detect in an image, so training at a smaller size enables you to actually detect smaller instances correctly.
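As a sketch, the two steps look like this (the annotation files, sample counts, and output directory are placeholders for your own setup):

```shell
# Pack the annotated positives into a .vec file, resized to 9x14:
opencv_createsamples -info positives.txt -vec samples.vec -num 1000 -w 9 -h 14

# Train with the SAME -w and -h that were used to create the .vec file:
opencv_traincascade -data cascade -vec samples.vec -bg negatives.txt \
                    -numPos 900 -numNeg 500 -w 9 -h 14
```

Keeping -numPos a bit below the number of samples in the .vec file leaves headroom for samples that get rejected during the stages.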

Once you perform detection, you can limit the scale-space pyramid by specifying the minSize and maxSize parameters of the detection function.