
How does the parameter scaleFactor in detectMultiScale affect face detection?

asked 2013-04-03 10:38:55 -0500

mantithetical

I am trying out a slight variation of the example from

    // Load the pretrained frontal-face Haar cascade
    CascadeClassifier faceDetector = new CascadeClassifier("/haarcascade_frontalface_default.xml");
    Mat image = Highgui.imread(originalFile.getAbsolutePath());
    MatOfRect faceDetections = new MatOfRect();
    double w = (double) originalCrop.getWidth();
    double h = (double) originalCrop.getHeight();
    // detectMultiScale(image, objects, scaleFactor, minNeighbors, flags, minSize, maxSize)
    faceDetector.detectMultiScale(image, faceDetections, 3, 1,
            Objdetect.CASCADE_DO_CANNY_PRUNING, new Size(w / 16, h / 16), new Size(w / 2, h / 2));

From the API: scaleFactor – Parameter specifying how much the image size is reduced at each image scale.

Changing the scaleFactor changes what is detected. For example, for the following image:

scaleFactor of 3 --> Gorbachev's face is not detected
scaleFactor of 2 --> Gorbachev's face is detected twice (one larger rectangle containing a smaller one)
scaleFactor of 1.01 --> Gorbachev's face is detected once

How exactly does this work?


1 answer


answered 2013-04-04 02:51:14 -0500

Basically, the scale factor is used to create your scale pyramid. More explanation can be found at this link:

In short: your model has a fixed size, defined during training. That means a face of exactly this size is detected in the image, if one occurs. However, by rescaling the input image, you can shrink a larger face towards that fixed size, making it detectable by the algorithm.

Using a small resizing step, for example 1.05 (meaning you reduce the size by 5% at each level), increases the chance that a size matching the model is found.
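To make this concrete, here is a small standalone sketch (not from the post; the 24 px model window and 480 px image height are illustrative values) of which face sizes the detector can match. At each pyramid level the image shrinks by scaleFactor, so detectable face sizes are roughly modelSize * scaleFactor^n. A small scaleFactor yields a much denser set of sizes, hence a better chance that some level matches the actual face size:

```java
import java.util.ArrayList;
import java.util.List;

public class DetectableSizes {
    // Approximate face sizes (in px) a fixed-size model can match across the pyramid.
    static List<Integer> detectableSizes(double scaleFactor, int modelSize, int imageSize) {
        List<Integer> sizes = new ArrayList<>();
        for (double s = modelSize; s <= imageSize; s *= scaleFactor) {
            sizes.add((int) Math.round(s));
        }
        return sizes;
    }

    public static void main(String[] args) {
        // Coarse pyramid: only a handful of face sizes can be matched.
        System.out.println("scaleFactor 2.0 : " + detectableSizes(2.0, 24, 480));
        // Fine pyramid: many more candidate sizes, so a real face is less likely to fall between levels.
        System.out.println("scaleFactor 1.05: " + detectableSizes(1.05, 24, 480).size() + " distinct sizes");
    }
}
```

This also explains the asker's observations: with scaleFactor 3 the pyramid is so coarse that no level happens to match Gorbachev's face, while 1.01 samples sizes densely enough to hit it.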



Thanks for the explanation.

If the scaleFactor is small, does the algorithm still only go through the same number of scalings as when the scaleFactor is large? Or does it adapt to ensure that it shrinks the image down as much as the larger scaleFactor in the last iteration? If the number of scalings remains the same, it would imply that, if the scaleFactor is small, the algorithm does not shrink the image as much. Is that correct?

angela ( 2015-06-23 14:57:49 -0500 )

No, the scale factor defines the percentage of down- or upscaling between two pyramid levels, and thus effectively decides how many levels there will be. For example, 1.10 means that each step downscales the image by 10%. This continues, starting from the input image resolution, until you reach the model dimension in X or Y.

StevenPuttemans ( 2015-06-24 03:39:50 -0500 )
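The stopping rule described in this comment can be sketched in a few lines of plain Java (the 640x480 image and the 24 px model window are assumed example values, not taken from the thread): downscaling by scaleFactor repeats until the image falls below the model window in X or Y, so scaleFactor fully determines the level count.

```java
public class PyramidLevels {
    // Count pyramid levels: shrink until the image is smaller than the model window.
    static int pyramidLevels(double scaleFactor, int width, int height, int modelSize) {
        int levels = 0;
        double w = width, h = height;
        while (w >= modelSize && h >= modelSize) {
            w /= scaleFactor;   // each level shrinks both dimensions by the same factor
            h /= scaleFactor;
            levels++;
        }
        return levels;
    }

    public static void main(String[] args) {
        // A fine step (1.10) gives many levels; a coarse step (2.0) gives few.
        System.out.println("scaleFactor 1.10 on 640x480: " + pyramidLevels(1.10, 640, 480, 24) + " levels");
        System.out.println("scaleFactor 2.00 on 640x480: " + pyramidLevels(2.00, 640, 480, 24) + " levels");
    }
}
```

Note that the smaller dimension (here the height) is what ends the loop, which is what the next two comments discuss.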

If the model dimension in X is reached first, does the process continue until the model dimension in Y is reached?

Sheng Liu ( 2015-08-08 02:38:31 -0500 )

No, because the sliding window will take care of the remaining Y dimension.

StevenPuttemans ( 2015-08-09 03:19:15 -0500 )

Here a larger face is resized towards a smaller size. Is the other way around possible, i.e. resizing a smaller face to a larger one in subsequent scales?

divyquery ( 2017-06-06 21:13:56 -0500 )

We do not upscale because upscaling introduces artefacts. Those artefacts interfere with the detection process.

StevenPuttemans ( 2017-06-07 04:16:08 -0500 )


Seen: 50,360 times

Last updated: Apr 04 '13