detectMultiScale(...) internal principle?

asked 2019-12-18 13:12:46 -0600

chnbr

Hi,

I am using detectMultiScale on source images of around 960x540 with an LBP classifier trained on 20x20 samples. It works fine, but I need to understand exactly what detectMultiScale does, because I plan to reimplement it in Metal. From reading the C++ code I have understood that the source image is resized to several scales derived from the scaleFactor and the Size parameters (minSize/maxSize); this is the outermost loop of detectMultiScale. What I did not get is how the inner loops are organised.
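To make sure I read the code correctly, this is the outer loop as I currently picture it (my own simplified sketch, not the actual OpenCV source; detectAllScales is just a made-up name, and the exact break/skip conditions are my assumption):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// My simplified sketch of the pyramid loop (not the real OpenCV code).
// windowSize is the trained window (20x20 here); scaleFactor, minSize and
// maxSize are the detectMultiScale() parameters.
std::vector<cv::Rect> detectAllScales(const cv::Mat& src, cv::Size windowSize,
                                      double scaleFactor, cv::Size minSize,
                                      cv::Size maxSize)
{
    std::vector<cv::Rect> objects;
    for (double factor = 1.0; ; factor *= scaleFactor)
    {
        // Size of the downscaled image at this pyramid level.
        cv::Size scaled(cvRound(src.cols / factor), cvRound(src.rows / factor));
        // Size that the fixed 20x20 window corresponds to in the original image.
        cv::Size winOrig(cvRound(windowSize.width * factor),
                         cvRound(windowSize.height * factor));
        if (scaled.width < windowSize.width || scaled.height < windowSize.height)
            break;  // the window no longer fits into the level
        if (winOrig.width > maxSize.width || winOrig.height > maxSize.height)
            break;  // detections would be larger than maxSize
        if (winOrig.width < minSize.width || winOrig.height < minSize.height)
            continue;  // detections would be smaller than minSize, skip level
        cv::Mat level;
        cv::resize(src, level, scaled, 0, 0, cv::INTER_LINEAR);
        // ... slide the fixed 20x20 window over 'level' and run the cascade ...
    }
    return objects;
}
```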

The LBP features are always computed on a 20x20 sub-rectangle (in my case). Now I wonder how this 20x20 window is shifted over the source image. Is it really evaluated at every pixel position? That would mean (960-20)*(540-20) window evaluations for the unscaled image alone. I don't think it is done this way...
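For illustration, this is the brute-force scan my question is about (again my own sketch; evaluateWindow is a hypothetical stand-in for running the cascade stages on one window, and the step value is exactly what I am unsure about):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical stand-in for running all LBP cascade stages on one window.
bool evaluateWindow(const cv::Mat& level, const cv::Rect& win);

// Brute-force scan of one pyramid level (my own sketch, surely not exactly
// what OpenCV does). With step = 1 every pixel position is a candidate.
void scanLevel(const cv::Mat& level, int step, std::vector<cv::Rect>& objects)
{
    for (int y = 0; y + 20 <= level.rows; y += step)
        for (int x = 0; x + 20 <= level.cols; x += step)
        {
            cv::Rect win(x, y, 20, 20);
            if (evaluateWindow(level, win))
                objects.push_back(win);  // still in level coordinates here
        }
}
```

With step = 1 at factor 1.0 this gives roughly the (960-20)*(540-20) evaluations I mentioned above, which is why I suspect a larger stride or some early-rejection trick.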

Can anybody shed light on how this is done? Thanks.
