Normalizing images for opencv_traincascade

I have a set of satellite images containing roofs, and I am using opencv_traincascade to train a Viola-Jones detector for them.

Because I want to apply transformations (rotations, flips, etc.) to the original roofs, I have cut the roofs out of the original images (with 10 pixels of padding) and am using those patches as my positive examples. This is in contrast to using the entire image (with multiple roofs in it) and telling OpenCV where the roofs are located in it.
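For reference, this is roughly how I'm extracting the padded patches (a minimal Python sketch; the `annotations` list and file names are my own placeholders, not anything OpenCV provides):

    import os
    import cv2

    PAD = 10  # pixels of context kept around each roof box

    # Hypothetical annotation format: one (x, y, width, height) box per roof,
    # in the pixel coordinates of the source tile.
    annotations = [(120, 80, 40, 35), (300, 210, 55, 50)]

    os.makedirs("pos", exist_ok=True)
    img = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)
    h, w = img.shape

    for i, (x, y, bw, bh) in enumerate(annotations):
        # Clamp the padded box so it stays inside the image borders.
        x0, y0 = max(x - PAD, 0), max(y - PAD, 0)
        x1, y1 = min(x + bw + PAD, w), min(y + bh + PAD, h)
        cv2.imwrite(f"pos/roof_{i:04d}.png", img[y0:y1, x0:x1])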

I'm wondering what the right way is to normalize the images:

  • Is it enough to simply divide the pixels of my roof patches by 255, or does the algorithm perform better if I do something more involved (see the sketch below for what I mean)?
  • When I test on a held-out test set, I assume I will also want to divide the test satellite images by 255, correct?
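For concreteness, here is what I mean by the two options (a minimal Python sketch; the file name is a placeholder, and I haven't confirmed either step is actually needed, which is exactly my question):

    import cv2
    import numpy as np

    patch = cv2.imread("pos/roof_0000.png", cv2.IMREAD_GRAYSCALE)

    # Option 1: plain scaling to [0, 1] -- "divide by 255".
    scaled = patch.astype(np.float32) / 255.0

    # Option 2: a per-image adjustment, e.g. histogram equalization,
    # which stays in the 8-bit range and can be written back to disk.
    equalized = cv2.equalizeHist(patch)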
