How to improve the performance of a Haar cascade classifier during training?

asked 2018-05-03 12:06:23 -0500 by RazeK

Hi everyone!

I'm trying to train a Haar cascade classifier to detect the "balls" of a SAG mill. The images show the lateral faces of the mill and are in grayscale. Each original image has an approximate resolution of 2000x1500 pixels, and each ball falls inside a bounding box of about 20x20 pixels.

[Image: random ROI of an original image]

After some pre-processing, I manually annotated each ball present in the original images, obtaining a positive dataset of approximately 2,300 images of 20x20 pixels each.
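The cropping step above can be sketched as follows. This is a minimal illustration, not the exact pipeline used: it assumes the annotations are stored as (x, y) pixel centers, and the function name is hypothetical.

```python
import numpy as np

BOX = 20  # side of each positive sample, matching the 20x20 annotations

def crop_positives(image, centers, box=BOX):
    """Cut a box x box patch around each annotated ball center.

    image: 2-D grayscale array; centers: iterable of (x, y) pixel centers.
    Patches that would fall outside the image bounds are skipped.
    """
    half = box // 2
    h, w = image.shape
    patches = []
    for x, y in centers:
        if half <= x <= w - half and half <= y <= h - half:
            patches.append(image[y - half:y + half, x - half:x + half])
    return patches

# Tiny synthetic example: a 100x100 image with two annotated centers;
# the second center is too close to the edge and gets skipped.
img = np.zeros((100, 100), dtype=np.uint8)
pos = crop_positives(img, [(30, 40), (95, 95)])
```

Each resulting patch can then be written out as a separate 20x20 positive image.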

[Image: positive dataset generation]

[Image: positive dataset example]

The negative dataset was generated from the original images, with the regions containing balls filled in black. Specifically, I slid a window over the original images, which allowed me to cut out approximately 13,000 images of 50x50 pixels containing no balls (I am not sure whether this last step is correct).
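The sliding-window step can be sketched like this. It is a minimal sketch under one assumption: any window touching a blacked-out (zero-valued) ball region is discarded, so the remaining patches are guaranteed ball-free.

```python
import numpy as np

WIN, STRIDE = 50, 50  # 50x50 negative patches, non-overlapping stride

def extract_negatives(image, win=WIN, stride=STRIDE):
    """Slide a window over a ball-masked image and keep patches
    that contain no masked (black) pixels.

    image: 2-D grayscale array in which ball regions were filled with 0.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            if (patch > 0).all():  # skip windows overlapping a masked ball
                patches.append(patch)
    return patches

# Synthetic check: a bright 150x150 image (3x3 grid of windows)
# with one 50x50 region blacked out as if a ball had been masked there.
img = np.full((150, 150), 128, dtype=np.uint8)
img[0:50, 0:50] = 0
neg = extract_negatives(img)
```

A smaller stride would produce overlapping negatives and a larger dataset, at the cost of more redundancy between patches.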

[Image: negative dataset example]

Then I used the opencv_createsamples tool, setting the -w and -h parameters to 15 pixels (all ball bounding boxes are about 20 pixels). I read in the documentation and in books such as OpenCV 3 Blueprints that this sets the minimum detectable object size. I ran tests with the remaining parameters at their defaults, and also with the rotation parameters (maxxangle, maxyangle, maxzangle) set to zero, since in the original images the balls always have the same perspective and size.

I read in the same book that it is not advisable to perform data augmentation when the positive images do not have a "clean" background that can be treated as transparent, as is the case here.

Finally, I ran the opencv_traincascade tool with the following parameters:

  1. -numStages 20 (commonly used in the documentation),
  2. -minHitRate 0.999 (recommended in the documentation),
  3. -maxFalseAlarmRate 0.5 (recommended in the documentation),
  4. -numPos 1904 (total_positive_images * 0.85, as recommended in the mentioned book),
  5. -numNeg 650 (total_negative_images / num_of_stages, since negative images stop being considered in training once a stage classifies them correctly),
  6. -w 15,
  7. -h 15,
  8. -mode ALL,
  9. -precalcValBufSize 4096,
  10. -precalcIdxBufSize 4096

By varying some parameters (e.g. numPos and numNeg), my results are no better than what is shown in the following image:

[Image: obtained results]

What recommendations would you give me to improve the classifier's performance (am I doing something wrong?), or should I try some other approach, perhaps deep learning?
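One thing that can help when tuning parameters is scoring the detections against the manual annotations instead of judging the result image by eye. A minimal sketch of such a check, assuming both detections and ground truth are available as (x, y, w, h) boxes (the helper names here are illustrative, not part of OpenCV):

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def recall_at(detections, ground_truth, thresh=0.5):
    """Fraction of annotated balls matched by at least one detection."""
    hits = sum(
        any(iou(d, g) >= thresh for d in detections) for g in ground_truth
    )
    return hits / len(ground_truth) if ground_truth else 0.0

# Example: two annotated balls, one detected with good overlap, one missed.
gt = [(10, 10, 20, 20), (60, 60, 20, 20)]
det = [(12, 11, 20, 20)]
```

Tracking a number like this per training run makes it much easier to tell whether a change to numPos, numNeg, or the window size actually moved things in the right direction.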
