Why is there a difference between OpenCV's scale change implementation of detectMultiScale between the cascade classifier and HOGDescriptor? [closed]

asked 2015-08-21 09:11:13 -0500

I know the gist of how detectMultiScale in OpenCV works: a fixed-size detection window is scanned across the image, and at each position particular feature calculations are done on the pixels inside the window to decide whether a detection occurred or not.
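To make sure we mean the same thing, here is a minimal pure-Python sketch of that sliding-window scan (this is my own illustration, not OpenCV's code; `score_fn` stands in for whatever feature computation the detector runs per window):

```python
# Hypothetical sliding-window scan: a fixed-size detection window steps across
# the image with a given stride, and a placeholder score function decides
# whether each window position counts as a detection.

def sliding_window_detect(img_w, img_h, win_w, win_h, stride, score_fn, thresh):
    """Return top-left corners of windows whose score exceeds thresh."""
    detections = []
    for y in range(0, img_h - win_h + 1, stride):
        for x in range(0, img_w - win_w + 1, stride):
            # score_fn is a stand-in for the per-window feature evaluation
            if score_fn(x, y, win_w, win_h) > thresh:
                detections.append((x, y))
    return detections

# Example: a score function that accepts every window, on a 64x32 image
# with a 16x16 window and stride 8, visits 7 x-positions and 3 y-positions.
hits = sliding_window_detect(64, 32, 16, 16, 8, lambda x, y, w, h: 1.0, 0.5)
```

This single scan only finds objects at one size; the scaling question below is about how OpenCV repeats it across sizes.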

However, from OpenCV's documentation it would seem that the manner in which the scaling (to detect objects of different sizes) takes place differs depending on whether you are using the

cascade classifier (code can be found at http://code.opencv.org/projects/openc...)

or the HOGDescriptor (code can be found at http://code.opencv.org/projects/openc...).

The OpenCV documentation states that the cascade classifier's detectMultiScale uses a scaleFactor to repeatedly REDUCE THE IMAGE SIZE in which detection takes place until the image is smaller than the detection window, while the HOGDescriptor's detectMultiScale has a scale factor (scale0) that repeatedly INCREASES THE DETECTION WINDOW until it reaches the size of the image being checked for detections.
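A small illustration of why the two strategies cover the same set of relative sizes (again my own sketch, not OpenCV source; the loop conditions are assumptions based on the documented behaviour): shrinking the image by scaleFactor with a fixed window is equivalent, in terms of the window-to-image size ratio tried at each level, to growing the window by scale0 over a fixed image.

```python
# Cascade style: shrink the image by scale_factor each level, window fixed.
def cascade_levels(img_size, win_size, scale_factor):
    """Image sizes tried while the shrunk image still fits the window."""
    levels, size = [], float(img_size)
    while size >= win_size:
        levels.append(size)
        size /= scale_factor
    return levels

# HOG style: grow the detection window by scale0 each level, image fixed.
def hog_levels(img_size, win_size, scale0):
    """Window sizes tried while the grown window still fits the image."""
    levels, size = [], float(win_size)
    while size <= img_size:
        levels.append(size)
        size *= scale0
    return levels

# Both strategies visit the same number of scale levels for the same factor,
# just walking the window/image size ratio from opposite ends.
n_cascade = len(cascade_levels(640, 64, 1.1))
n_hog = len(hog_levels(640, 64, 1.1))
```

So the difference is mostly in which quantity gets resampled at each level, not in which object sizes end up being searched.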

Why is there a difference between the two? Is one implementation better than the other?

Currently I have trained both a cascade classifier with HOG features and an SVM with HOG features (HOGDescriptor) in OpenCV 2.4.8.

Thank you in advance


Closed for the following reason question is not relevant or outdated by sturkmen
close date 2020-10-05 09:26:58.673559