What is happening can be explained quite straightforwardly. Each cascade classifier is trained with a -w and a -h parameter, which can be retrieved from the XML model; together they define the dimensions of the object model. For multiscale detection, the image is scaled down from its original size by -scaleFactor step by step, and at each scale the original model size is used to perform detections.
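
For illustration, here is a minimal Python sketch of how you can read that model size back from the trained cascade and see why it bounds detection; the file names are placeholders:

```python
import cv2

# The paths are placeholders; any OpenCV cascade XML works the same way.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# getOriginalWindowSize() returns the (-w, -h) training dimensions.
model_w, model_h = cascade.getOriginalWindowSize()
print("model window:", model_w, "x", model_h)

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale only scales the image DOWN by scaleFactor at each step,
# so nothing smaller than (model_w, model_h) pixels can ever be found.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
```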
However, this means that the model dimensions directly define the minimum dimensions an object must have to be detected. You can avoid this by initially upscaling the image, BUT this comes at a price: upscaling introduces noise and interpolation errors, and the more you upscale, the more these can influence the model evaluation. So at a certain point the developers decided that the benefits of automatic upscaling did not outweigh its downsides, and it is left to the user to decide.
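
If you do want to find smaller objects, a sketch of the manual upscaling route could look like this (the factor of 2.0 is an arbitrary choice, and the detection coordinates have to be mapped back to the original image):

```python
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Upscale first so objects smaller than the model window become detectable.
# Larger factors let you find smaller objects but add more artifacts.
factor = 2.0
big = cv2.resize(gray, None, fx=factor, fy=factor,
                 interpolation=cv2.INTER_LINEAR)

detections = cascade.detectMultiScale(big, scaleFactor=1.1, minNeighbors=3)

# Map each detection back to original-image coordinates before drawing.
for (x, y, w, h) in detections:
    x, y, w, h = (int(v / factor) for v in (x, y, w, h))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```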
I am pretty sure that if you count the pixel width and height of the object that was not detected, they will be smaller than the model size.
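
You can verify this with a quick check (the measured object size below is hypothetical):

```python
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
model_w, model_h = cascade.getOriginalWindowSize()

# Pixel size of the undetected object, measured by hand (hypothetical values).
obj_w, obj_h = 18, 20
if obj_w < model_w or obj_h < model_h:
    print("object is smaller than the model window;"
          " it cannot be detected without upscaling")
```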