Haar Cascade negative images for backgrounds that don't change

asked 2017-06-26 08:49:46 -0500

Hello all,

I am creating a Haar cascade classifier that operates on an image whose background doesn't change.

First, a little background: I have created a top-down 3D simulation of an animal pen. The pen has a dark grey wooden texture for the floor and a wooden fence around the perimeter. Inside the pen are animals that are all of the same type, and more specifically the same model. No other animals, no other objects.

It's pretty apparent what I need to use for my positive training data: the animal. So what I've done is write some code to randomly place and rotate my animal around the pen. I then take a bunch of screenshots (about 100) and crop out the animals while including as little background as possible. I have also experimented with making the floor texture plain white so as not to pick up any background. Since this is more of an experiment / simulation, I can do that.

Where I need some help / advice is with the negative images. For the negative images I'm obviously going to use just the pen with no animals in it.

  1. Since the pen never changes, I first tried using a few hundred copies of the exact same image. As I expected, this didn't work out very well. I used about 40 positive images (maybe that's not enough?).

  2. Next I tried using the same image, but this time cropping out about 50 different sections of it. This worked a bit better. I again used about 40 positives (maybe still not enough?).

My questions:

  1. What do you suggest I use for my negative training data? Should I write a script to crop out 500 random parts of the background? It doesn't really make sense to me to use a bunch of random photos containing objects that will never appear in my scene as negatives.

  2. If I want to really reinforce the classifier to detect this animal as a whole, would it be a good idea to use crops of the background with only part of the animal visible as negatives? Would this make the classifier more likely to detect the animal as a whole, or does it go against the rule of keeping anything you want to detect out of the negatives?
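Regarding question 1, the random-patch script is straightforward to sketch. Below is a minimal version of the cropping logic, assuming a hypothetical empty-pen screenshot; the filename, patch size, and count are placeholders. In practice you would load the screenshot with `cv2.imread`, write each patch with `cv2.imwrite`, and append its path to a `bg.txt` listing for `opencv_traincascade`:

```python
import random
import numpy as np

def random_patches(img, n, patch_w, patch_h, seed=0):
    """Crop n random patch_w x patch_h windows from an H x W (x C) image array."""
    rng = random.Random(seed)
    h, w = img.shape[:2]
    patches = []
    for _ in range(n):
        # randint is inclusive on both ends, so the patch always fits
        x = rng.randint(0, w - patch_w)
        y = rng.randint(0, h - patch_h)
        patches.append(img[y:y + patch_h, x:x + patch_w])
    return patches

# Demo on a synthetic "background"; substitute cv2.imread("pen_background.png")
# for the real empty-pen screenshot (hypothetical filename).
background = np.zeros((480, 640, 3), dtype=np.uint8)
negatives = random_patches(background, 500, 100, 100)
print(len(negatives))  # 500
```

One thing to keep in mind: the negative patches should be at least as large as the cascade's training window, since the trainer samples sub-windows from them.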

I have been wondering whether a Haar cascade is overkill for my application and I should just subtract the background from the image using segmentation. The problem is that when animals are very close to / on top of each other, it becomes hard to distinguish them with that method.
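For reference, the subtraction approach mentioned above can be sketched as follows. This is a pure-NumPy toy version (difference threshold plus BFS blob labeling) so the logic is visible; a real pipeline would use `cv2.absdiff`, `cv2.threshold`, and `cv2.findContours`, or a `cv2.createBackgroundSubtractorMOG2` model. The threshold and minimum-area values are arbitrary assumptions. Note it exhibits exactly the failure mode described: touching animals merge into a single blob.

```python
import numpy as np
from collections import deque

def segment_foreground(frame, background, thresh=30, min_area=50):
    """Background subtraction + 4-connected blob labeling.
    Returns one bounding box (x, y, w, h) per foreground blob."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    if diff.ndim == 3:                      # color: take max over channels
        diff = diff.max(axis=2)
    mask = diff > thresh
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # BFS flood fill collects one connected blob
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                ys, xs = [], []
                while q:
                    y, x = q.popleft()
                    ys.append(y)
                    xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(ys) >= min_area:     # drop tiny noise blobs
                    boxes.append((min(xs), min(ys),
                                  max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes

# Demo: two separated "animals" on an empty background yield two boxes.
background = np.zeros((120, 160), dtype=np.uint8)
frame = background.copy()
frame[10:30, 10:30] = 255
frame[50:80, 60:90] = 255
boxes = segment_foreground(frame, background)
print(boxes)  # [(10, 10, 20, 20), (60, 50, 30, 30)]
```

If the two bright regions touch, the fill merges them into one box, which is why this method breaks down for overlapping animals.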




Cascades are for rigid objects. Real animals can take any kind of pose / orientation, so this will fare poorly (if not be totally unusable).

berak ( 2017-06-26 09:05:37 -0500 )

Interesting. Sounds like I should continue with my contour edge detection method instead of spending any more time on this. Any more suggestions or comments welcome.

binsky3333 ( 2017-06-26 09:56:14 -0500 )

You never mention what the whole thing is supposed to achieve.

berak ( 2017-06-26 10:00:47 -0500 )

I would like to detect these animals and be able to track them. I have been successful with my contour approach, but run into issues when the animals are very close to each other or lying on top of each other (as happens in real life). The point of the simulated environment is to experiment with different techniques while I wait for actual real-life data of the pen and animals.

binsky3333 ( 2017-06-26 10:05:56 -0500 )

Difficult task, honestly. (Also, it's easy to say "won't work", but coming up with a solution is an entirely different story.)

berak ( 2017-06-26 10:09:58 -0500 )