Why does Haar cascade classifier performance change when I crop an image?

asked 2017-04-04 19:33:59 -0500

daveg2

I have an image which contains several (10-20) objects. The objects are typically small (50x10px) compared to the overall image (2500x2000px).

I can run detectMultiScale on the entire image and get good results. However, since I know the objects are contained in a certain area, I decided to select a sub-image and run detectMultiScale on that instead, to reduce processing time.

I was surprised to find that detectMultiScale returned slightly different results when performed on the sub image. In some cases, it missed some of the objects that it could find on the full image.

I'm using scaleFactor = 1.05, minNeighbors = 3, HaarDetectionType = FindBiggestObject, and identical min/max sizes each time.

Can anyone help me understand why I would be getting different results? Are there any tricks I can apply to make this more consistent?
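One thing worth trying (a sketch, not from the thread itself) is to crop with a margin around the region of interest, so that objects near the crop boundary keep their surrounding context, and then offset the detections back into full-image coordinates. The region coordinates and margin below are hypothetical:

```python
def padded_roi(x, y, w, h, img_w, img_h, margin):
    """Expand a region of interest by `margin` pixels on every side,
    clamped to the image bounds, so that detectMultiScale on the crop
    sees roughly the same context as on the full image."""
    x0 = max(0, x - margin)
    y0 = max(0, y - margin)
    x1 = min(img_w, x + w + margin)
    y1 = min(img_h, y + h + margin)
    return x0, y0, x1 - x0, y1 - y0

# Hypothetical region inside a 2500x2000 image:
roi = padded_roi(400, 300, 800, 600, 2500, 2000, margin=50)

# Then run the cascade on the crop, e.g. (cv2 sketch):
#   sub = img[roi[1]:roi[1] + roi[3], roi[0]:roi[0] + roi[2]]
#   hits = cascade.detectMultiScale(sub, scaleFactor=1.05, minNeighbors=3)
# and add (roi[0], roi[1]) back to each detection's (x, y).
```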






The scaling determines which window sizes the detection occurs at, and cropping changes the set of scales the image pyramid produces. It may be that on the cropped image, the particular scale at which your objects were being detected is skipped. Try making the scale increments smaller (e.g. a scaleFactor closer to 1.0); this should make the results more consistent, at the cost of extra processing time.

MRDaniel ( 2017-04-05 00:53:48 -0500 )
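The pyramid effect described above can be sketched numerically. detectMultiScale evaluates a discrete set of window sizes, growing the training window by scaleFactor until it no longer fits in the image, so a crop truncates the pyramid and a smaller step fills it in more densely. The 50x10 window and the image sizes below are illustrative, not taken from the question:

```python
def pyramid_window_sizes(img_w, img_h, win_w, win_h, scale_factor):
    """Window sizes a cascade effectively scans: the training window
    grown by scale_factor until it no longer fits inside the image."""
    sizes, f = [], 1.0
    while win_w * f <= img_w and win_h * f <= img_h:
        sizes.append((int(win_w * f), int(win_h * f)))
        f *= scale_factor
    return sizes

# Illustrative 50x10 training window:
full = pyramid_window_sizes(2500, 2000, 50, 10, 1.05)  # full image
crop = pyramid_window_sizes(600, 400, 50, 10, 1.05)    # cropped image
fine = pyramid_window_sizes(600, 400, 50, 10, 1.01)    # smaller step

# The crop has fewer scales, so a window size present in `full` can be
# absent from `crop`; shrinking scaleFactor densifies the set of sizes.
```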

Could you provide a sample image?

sturkmen ( 2017-09-28 19:33:03 -0500 )