Normalizing images for opencv_traincascade

asked 2015-07-10 05:49:16 -0600 by angela

updated 2015-07-10 05:50:01 -0600

I have a set of satellite images with roofs in them. I am using opencv_traincascade to train a Viola-Jones detector for roofs.

Because I want to apply transformations (rotations, flips, etc.) to the original roofs, I have cut the roofs out of the original images (with 10 pixels of padding) and am using those patches as my positive examples. This is in contrast to using the entire image (with multiple roofs in it) and telling opencv where the roofs are located in the image.

I'm wondering what the right way is to normalize the images:

  • Is it enough to simply divide the pixel values of my roof patches by 255 (see the sketch after this list), or does the algorithm perform better if I do something more complicated?
  • When I perform testing on a held-out test set, I assume I will also want to divide the test satellite images by 255, correct?
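
By "divide by 255" I mean something like this minimal sketch (the file name is hypothetical and the patches are assumed to be 8-bit grayscale):

    import cv2
    import numpy as np

    # Hypothetical file name; the patch is assumed to be an 8-bit image.
    patch = cv2.imread("roof_patch_0001.png", cv2.IMREAD_GRAYSCALE)
    patch_norm = patch.astype(np.float32) / 255.0  # values now in [0, 1]

    # The same scaling would then be applied to the test satellite images,
    # so training and testing see identically scaled inputs.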

Comments

I am not sure that what you want to achieve is best done with Viola-Jones. You are looking for a roof texture which can appear in essentially any orientation. In that case I would dig deeper into texture-based filters.

StevenPuttemans ( 2015-07-10 06:12:43 -0600 )

Thanks! I will definitely look into that.

I have actually trained 3 different detectors with Viola-Jones (one for roofs that are aligned diagonally, one for vertically aligned roofs, and one for horizontally aligned roofs). They are giving me a lot of false positives, but they miss almost no roofs. I am then feeding these detections to a neural network that classifies them as either non-roof or one of two roof types. If I want to continue with this approach, what do you recommend I do about normalization? Should I be normalizing at all? Similar to what I did with the detectors, I'm thinking I could normalize the horizontal, diagonal and vertical roofs separately, along the lines of the sketch below. Does traincascade prefer something else?
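
To be concrete, by normalizing each orientation group separately I mean something like this rough sketch (the patch lists are placeholders, just to illustrate the idea):

    import numpy as np

    def standardize_group(patches):
        # patches: list of equally sized grayscale arrays (assumed given)
        stack = np.stack(patches).astype(np.float32)
        mean, std = stack.mean(), stack.std()
        # keep mean/std so the same transform can be reused later
        return (stack - mean) / (std + 1e-8), mean, std

    horiz_norm, h_mean, h_std = standardize_group(horizontal_patches)
    diag_norm, d_mean, d_std = standardize_group(diagonal_patches)
    vert_norm, v_mean, v_std = standardize_group(vertical_patches)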

angela ( 2015-07-10 13:24:05 -0600 )

I thought about this a little longer, and it doesn't seem possible to normalize the test data according to the training data. Suppose I normalize all horizontal roof training patches and store the normalization factors (the mean and the standard deviation). My test example will be a large image, not just a small patch. If I want to apply the transformation I learnt from the training patches to the test example, I can't: the transformation makes sense only for small patches, not for the entire image in which I am searching for a roof.

So, what to do about this? One option I can think of is to normalize each candidate window rather than the whole image, as in the sketch below. Or does opencv_traincascade already take care of subtracting the mean and dividing by the std. dev. of the training and testing sets?
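
For illustration, per-window normalization could look roughly like this sketch (the window size and stride are arbitrary assumptions, not values from anywhere in this thread):

    import numpy as np

    def normalized_windows(image, win=24, stride=8):
        # Normalize each candidate window on its own, exactly like a
        # training patch, instead of normalizing the whole image.
        # (The original Viola-Jones paper variance-normalizes each
        # detection window in a similar spirit.)
        h, w = image.shape[:2]
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                window = image[y:y + win, x:x + win].astype(np.float32)
                yield (window - window.mean()) / (window.std() + 1e-8), (x, y)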

angela ( 2015-07-10 14:13:06 -0600 )