I know the gist of how detectMultiScale in OpenCV works: an image is scanned by a fixed-size detection window, and at each window position particular feature calculations are performed on the pixels inside the window to decide whether a detection occurred. A rough sketch of what I mean is shown below.
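For concreteness, here is a minimal sketch of the single-scale scan I have in mind (not OpenCV's actual implementation); classify() is a hypothetical stand-in for the per-window feature computation and classifier decision:

```cpp
#include <opencv2/core/core.hpp>
#include <vector>

// Conceptual sketch of a single-scale sliding-window pass: slide a
// fixed-size window over the image and evaluate the classifier at each
// position. NOT the actual OpenCV code, just the idea.
std::vector<cv::Rect> scanSingleScale(const cv::Mat& img, cv::Size win, int stride)
{
    std::vector<cv::Rect> hits;
    for (int y = 0; y + win.height <= img.rows; y += stride)
    {
        for (int x = 0; x + win.width <= img.cols; x += stride)
        {
            cv::Rect roi(x, y, win.width, win.height);
            // Features would be computed on img(roi) and fed to the trained
            // classifier here; classify() is a hypothetical stand-in:
            // if (classify(img(roi))) hits.push_back(roi);
            (void)roi;  // placeholder so the sketch compiles as-is
        }
    }
    return hits;
}
```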
However, from OpenCV's documentation it seems that the way the scaling (to detect objects of different sizes) takes place differs depending on whether you are using a cascade classifier (code: http://code.opencv.org/projects/opencv/repository/revisions/master/entry/modules/objdetect/src/cascadedetect.cpp#L1157) or the HOGDescriptor (code: http://code.opencv.org/projects/opencv/repository/revisions/master/entry/modules/objdetect/src/hog.cpp#L1309).
OpenCV's documentation states that the cascade classifier's detectMultiScale uses a scaleFactor to REDUCE THE IMAGE SIZE step by step until the image is smaller than the detection window, while the HOGDescriptor's detectMultiScale has a scale factor (scale0) that INCREASES THE DETECTION WINDOW until it reaches the size of the image being checked for detections.
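For reference, here is a minimal sketch of the two calls I am comparing, using the OpenCV 2.4 API; "test.png" and "cascade.xml" are hypothetical input and model files, and the HOG path uses the built-in default people detector just to make the example self-contained:

```cpp
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("test.png");         // hypothetical input image
    std::vector<cv::Rect> found;

    // Cascade path: scaleFactor (1.1 here) controls how much the image is
    // SHRUNK between pyramid levels, until it is smaller than the window.
    cv::CascadeClassifier cascade("cascade.xml"); // hypothetical model file
    cascade.detectMultiScale(img, found, 1.1 /*scaleFactor*/, 3 /*minNeighbors*/);

    // HOG path: scale0 (1.05 here) is the step between consecutive scales;
    // the documentation describes it as GROWING the detection window until
    // it reaches the image size.
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
    hog.detectMultiScale(img, found, 0 /*hitThreshold*/,
                         cv::Size(8, 8) /*winStride*/, cv::Size(0, 0) /*padding*/,
                         1.05 /*scale0*/);
    return 0;
}
```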
Why is there a difference between the two? Is one implementation better than the other?
Currently I have trained both a cascade classifier with HOG features and an SVM with HOG features (via HOGDescriptor) in OpenCV 2.4.8.
Thank you in advance.