
Haar Cascade negative images for backgrounds that don't change

Hello all,

I am creating a Haar cascade classifier that operates on an image whose background doesn't change.

First, a little background. I have created a top-down 3D simulation of an animal pen. The pen has a dark grey wooden texture for the floor and a wooden fence around the perimeter. Inside the pen are animals that are all the same type, and more specifically the same model. No other animals, no other objects.

It's pretty apparent what I need to use for my positive training data: the animal. So what I've done is write some code to randomly place and rotate the animal around the pen. I then take a bunch of screenshots (around 100) and crop out the animals while trying to include as little background as possible. I have also experimented with making the floor texture plain white so as not to capture any background. Since this is more of an experiment/simulation, I can do this.
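For reference, this is roughly how I turn those cropped positives into the description file that `opencv_createsamples -info` expects (just a sketch; the `positives/` folder and `positives.info` file names are placeholders of mine):

```python
import os
import cv2

POS_DIR = "positives"  # placeholder: folder of manually cropped animal images

# The description file format is: <path> <num_objects> <x> <y> <w> <h>
# For pre-cropped images, the single object covers the whole frame.
with open("positives.info", "w") as info:
    for name in sorted(os.listdir(POS_DIR)):
        path = os.path.join(POS_DIR, name)
        img = cv2.imread(path)
        if img is None:
            continue  # skip anything that isn't an image
        h, w = img.shape[:2]
        info.write(f"{path} 1 0 0 {w} {h}\n")
```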

Where I need some help / advice is with the negative images. For the negative images I'm obviously going to use just the pen with no animals in it.

  1. Since the pen never changes, I first tried using a few hundred copies of the exact same image. As I expected, this didn't work out that well. I used about 40 positive images (maybe that's not enough?).

  2. Next I tried using the same image, but this time cropping out about 50 different sections of it. This worked a bit better. I again used about 40 positives (maybe still not enough?).

  1. What do you suggest I use for my negative training data? Should I create a script to crop out 500 random parts of the background (see the sketch after this list)? It doesn't really make sense to me to use a bunch of random photos of objects that will never appear in my scene as the negatives.

  2. If I want to really reinforce the classifier to detect this animal as a whole, would it be a good idea to use crops of the background image with only part of the animal visible as negatives? Would this make the classifier more likely to detect the animal as a whole, or does it go against the whole idea of keeping anything you want to detect out of the negatives?
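For question 1, this is roughly the kind of cropping script I have in mind (just a sketch; `background.png`, the `neg/` folder, `bg.txt` and the crop size are placeholder assumptions of mine):

```python
import os
import random
import cv2

BACKGROUND = "background.png"  # placeholder: screenshot of the empty pen
OUT_DIR = "neg"                # placeholder: output folder for negative crops
NUM_CROPS = 500
CROP_W, CROP_H = 100, 100      # at or above the training window size

os.makedirs(OUT_DIR, exist_ok=True)
bg = cv2.imread(BACKGROUND)
assert bg is not None, "could not read the background image"
img_h, img_w = bg.shape[:2]

with open("bg.txt", "w") as listing:
    for i in range(NUM_CROPS):
        # pick a random top-left corner so the crop stays inside the image
        x = random.randint(0, img_w - CROP_W)
        y = random.randint(0, img_h - CROP_H)
        crop = bg[y:y + CROP_H, x:x + CROP_W]
        path = os.path.join(OUT_DIR, f"neg_{i:04d}.png")
        cv2.imwrite(path, crop)
        listing.write(path + "\n")  # opencv_traincascade reads negatives from this list
```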

I have been thinking that maybe a Haar cascade is overkill for my application and I should just subtract the background from the image using segmentation. But the issue is that when animals are very close to / on top of each other, it becomes hard to distinguish them with that method.
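For completeness, this is roughly the background-subtraction approach I'm comparing against (a sketch, assuming `background.png` is the empty pen and `frame.png` is a screenshot with animals; the threshold value is arbitrary):

```python
import cv2

bg = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)     # empty pen
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)        # pen with animals

# Pixels that differ from the static background are foreground (the animals)
diff = cv2.absdiff(frame, bg)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Each connected blob is treated as one animal; animals that touch or overlap
# merge into a single blob, which is exactly the failure case I'm worried about.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
print(f"Detected {num_labels - 1} blobs")  # label 0 is the background
```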

Thanks.