# How does the parameter scaleFactor in detectMultiScale affect face detection?

I am trying out a slight variation of the example from http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_java/java_dev_intro.html

    // Load the default frontal-face Haar cascade
    CascadeClassifier faceDetector = new CascadeClassifier("/haarcascade_frontalface_default.xml");
    MatOfRect faceDetections = new MatOfRect();
    double w = (double) originalCrop.getWidth();
    double h = (double) originalCrop.getHeight();
    // detectMultiScale(image, objects, scaleFactor, minNeighbors, flags, minSize, maxSize)
    faceDetector.detectMultiScale(image, faceDetections, 3, 1,
            Objdetect.CASCADE_DO_CANNY_PRUNING, new Size(w / 16, h / 16), new Size(w / 2, h / 2));


From the API: scaleFactor – Parameter specifying how much the image size is reduced at each image scale.

Changing the scaleFactor changes what is detected. For example, for the following image: http://graphics8.nytimes.com/images/2013/04/02/world/MOSCOW/MOSCOW-articleLarge-v2.jpg

- scaleFactor of 3 --> Gorbachev's face is not detected
- scaleFactor of 2 --> Gorbachev's face is detected twice (one larger rectangle containing a smaller one)
- scaleFactor of 1.01 --> Gorbachev's face is detected once

How exactly does this work?



Basically, the scale factor is used to create your scale pyramid. More explanation can be found at this link:

In short: your model has a fixed size defined during training. This means that a face of this size is detected in the image if one occurs. However, by rescaling the input image, you can resize a larger face to a smaller one, making it detectable by the algorithm.

Using a small step for resizing, for example 1.05, which means you reduce the size by 5% at each level, you increase the chance that a size matching the model is found.
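As an illustration (not OpenCV's actual implementation), you can enumerate the pyramid of sizes a given scale factor produces: each level divides the previous size by the factor, so a small factor like 1.05 visits many more sizes, and therefore gives many more chances for a face to land near the model size, than a large factor like 2.0. The 24 px model size below is an assumption (Haar frontal-face cascades are typically trained on 24x24 windows):

```java
import java.util.ArrayList;
import java.util.List;

public class ScalePyramid {
    // Enumerate the image sizes visited for a given scale factor,
    // stopping once the image would shrink below the model size.
    static List<Integer> levels(int imageSize, int modelSize, double scaleFactor) {
        List<Integer> sizes = new ArrayList<>();
        double s = imageSize;
        while (s >= modelSize) {
            sizes.add((int) s);
            s /= scaleFactor;
        }
        return sizes;
    }

    public static void main(String[] args) {
        // Hypothetical 600 px image dimension, 24 px model window.
        System.out.println("factor 2.0 : " + levels(600, 24, 2.0).size() + " levels");
        System.out.println("factor 1.05: " + levels(600, 24, 1.05).size() + " levels");
    }
}
```

For a 600 px image this gives only a handful of levels at factor 2.0 but dozens at 1.05, which is why the small factor catches faces the large one skips over.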


Thanks for the explanation.

If the scaleFactor is small, does the algorithm still only go through the same number of scalings as when the scaleFactor is large? Or does it adapt to ensure that it shrinks the image down as much as the larger scaleFactor in the last iteration? If the number of scalings remains the same, it would imply that, if the scaleFactor is small, the algorithm does not shrink the image as much. Is that correct?


No, the scale factor defines the percentage of down- or upscaling between two levels, and thus basically decides how many levels there will be. For example, 1.10 means that each time you downscale the image by 10%. This continues, starting from the input image resolution, until you reach the model dimension in X or Y.
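A rough sketch of that stopping rule (again not OpenCV's actual code): both dimensions shrink by the same factor at each level, and the pyramid stops as soon as either one would drop below the model window. The 600x400 image and 24x24 model below are assumed values for illustration:

```java
public class PyramidStop {
    // Count downscaling levels for a (w, h) image until either dimension
    // would drop below the model window size.
    static int countLevels(double w, double h, double factor, double model) {
        int n = 0;
        while (w >= model && h >= model) { // stop when X *or* Y hits the model size
            n++;
            w /= factor;
            h /= factor;
        }
        return n;
    }

    public static void main(String[] args) {
        // Hypothetical 600x400 image, factor 1.10: the shorter side (400)
        // is what limits the pyramid depth.
        System.out.println(countLevels(600, 400, 1.10, 24));
    }
}
```

Note how the shorter dimension always terminates the loop first, which is why the question below about "reaching X before Y" comes up.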


If the model dimension in X is reached first, does the process continue until it reaches the model dimension in Y?


No, because the sliding window will take care of the remaining Y dimension.


Here a larger face is resized towards a smaller size. Is the other way around possible? Can a smaller face be resized to a larger one in subsequent scales?


We do not upscale because upscaling introduces artefacts. Those artefacts interfere with the detection process.
