
Basically, what you expect OpenCV to do and what the model you're using actually does are two different things.

  1. The frontal face model is trained on data of aligned, upright faces of 24x24 pixels. It therefore cannot detect anything smaller than 24x24 pixels, and it works best on frontal, upright faces as test images. This is due to the current OpenCV implementation.
  2. The first thing that goes wrong is tilted heads. Since the model has no rotated faces in its training data, it will never detect them. This means you need a rotation-invariant face detector. How to achieve that with this model has been explained by me multiple times before on this forum AND is also explained in the OpenCV 3 Blueprints book, chapter 5.
  3. As for the image with the small faces: go ahead and upscale it, for example 2 or 3 times. Suddenly, faces will be detected.
  4. OpenCV face models are racist ... simply put, there are no black or Asian people in its training set, so the chance of them being detected is fairly low. You need models trained on black and Asian faces to achieve this.
  5. The last image contains different face poses, which again are not covered by the model. This means you will need extra models covering these poses, either by training them yourself or by finding them somewhere online.

Overall, OpenCV's cascade-based face detection might be too limited for your goal, which calls for much more robust and more complex algorithms to achieve a higher detection rate.