
BackgroundSubtractorMOG() with images not providing good results

asked 2014-10-31 08:09:58 -0600

So I am trying to use BackgroundSubtractorMOG with images instead of a video stream.

My setup:

I have a static camera. The background never changes. However, foreign objects (people) do appear in front of the background (imagine a camera in a cafe: when the cafe is empty, that's the background, and when people come in and sit down, they are the foreign objects that need to be detected).

I am trying to detect the foreign objects in the image. My idea is to use BackgroundSubtractorMOG() to run through a set of sample images, simulating a "video" stream, and then, at the end of the set, apply the background subtractor to the image in which I want to detect the foreign objects.
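Roughly, the loop looks like this (a minimal sketch against the OpenCV 2.4 C++ API; the file names, the frame count, and the learning-rate value are placeholders, not my exact code):

    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::BackgroundSubtractorMOG mog;                 // default MOG parameters

        // Feed the sample images to the subtractor as if they were video frames.
        for (int i = 0; i < 10; ++i)
        {
            char name[64];
            std::sprintf(name, "sample_%02d.png", i);    // placeholder file names
            cv::Mat frame = cv::imread(name);
            cv::Mat fgmask;
            mog(frame, fgmask, -1);                      // negative = automatic learning rate
        }

        // At the end of the set, apply the subtractor to the image of interest.
        cv::Mat target = cv::imread("cafe_with_people.png");   // placeholder
        cv::Mat targetMask;
        mog(target, targetMask);
        cv::imwrite("foreground_mask.png", targetMask);
        return 0;
    }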

So far my attempts have not been very successful. I was only able to detect a couple of the foreign objects out of about 10. The lighting in my pictures does differ sometimes. The subtractor also detects some objects that are actually part of the background and have never changed.

Question:

1) Is there a way to pre-process the images so that the foreign objects are detected more reliably?

2) Should I first run a set of background-only images through BackgroundSubtractorMOG() and then run an image with foreign objects, or should I simply mix images with and without foreign objects? How many images do I need to run through to get good results?

3) Is there any way to set up BackgroundSubtractorMOG() so that it does not have to run through a set of "default" images every single time before I apply it to my specific image?


1 answer


answered 2014-10-31 08:38:00 -0600

la lluvia

updated 2014-10-31 08:51:55 -0600

BackgroundSubtractor is usually used on video because it gradually learns over time what belongs to the background and what doesn't. If you only have a few background images, and their illumination differs from the image in which you want to detect people, everything will end up in the foreground when BackgroundSubtractorMOG is run. Keep in mind that the GMM background subtractor is not illumination invariant. You could run some background images taken just before the new people show up, so you won't get false alarms. Also keep in mind that BackgroundSubtractorMOG needs some time to adapt to the background; you can manage this with the learning rate (alpha), which by default is derived from the history length:

static const int defaultHistory2 = 500;

The larger the history, the more slowly the model updates the background. But I'd advise using a set of images taken no more than a few seconds apart, because otherwise you will have a lot of noise in your foreground mask.
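For reference, this is roughly how those knobs appear in the 2.4 C++ API; a sketch only, with example constructor values (history, number of mixtures, background ratio) that happen to match the MOG defaults:

    #include <opencv2/opencv.hpp>

    // Example values only: history of 200 frames, 5 Gaussian mixtures,
    // background ratio 0.7.
    cv::BackgroundSubtractorMOG mog(200, 5, 0.7);

    void feedFrame(const cv::Mat& frame)
    {
        cv::Mat fgmask;
        // Explicit learning rate (alpha): larger alpha -> faster adaptation,
        // smaller alpha (or a longer history) -> slower updates to the background.
        mog(frame, fgmask, 0.01);
    }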

To sum it up:

  1. I wouldn't recommend it.
  2. Yes, you have to teach your BackgroundSubtractorMOG what the background is first (see the sketch after this list).
  3. You can remember the BackgroundSubtractorMOG parameters and run it again, but your foreground mask will be noisy, and if the illumination is different you could end up with a lot of false alarms.
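A minimal sketch of what points 2 and 3 look like in practice, assuming the OpenCV 2.4 C++ API and placeholder file names: learn from background-only frames with a normal learning rate, then apply the subtractor to the image with people using a learning rate of 0 so they are not absorbed into the model.

    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::BackgroundSubtractorMOG mog;
        cv::Mat fgmask;

        // 2) Teach the model the empty background first.
        for (int i = 0; i < 20; ++i)
        {
            char name[64];
            std::sprintf(name, "background_%02d.png", i);   // background-only images
            cv::Mat bg = cv::imread(name);
            mog(bg, fgmask, -1);                             // negative = automatic learning rate
        }

        // 3) Apply it to the image with people, with a learning rate of 0
        //    so the people are not absorbed into the background model.
        cv::Mat scene = cv::imread("cafe_with_people.png");
        mog(scene, fgmask, 0);
        cv::imwrite("people_mask.png", fgmask);
        return 0;
    }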

Comments

So why wouldn't some preprocessing be good? Wouldn't it be better to bring a photo to a certain brightness so it matches better when used in BackgroundSubtractor?

GeorgiAngelov (2014-11-01 09:42:39 -0600)

You can try to filter it (adjust the brightness) and then play with the variance threshold that decides what goes to the background and what doesn't. I myself have never tried it.

la lluvia (2014-11-01 11:32:14 -0600)
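A minimal sketch of the brightness-normalization idea from the comments above (untested; the histogram-equalization step and function name are assumptions, not something either poster has verified):

    #include <opencv2/opencv.hpp>

    // Normalize brightness before feeding a frame to the subtractor, so that
    // global illumination changes are less likely to show up as foreground.
    cv::Mat normalizeBrightness(const cv::Mat& bgr)
    {
        cv::Mat gray, equalized;
        cv::cvtColor(bgr, gray, CV_BGR2GRAY);
        cv::equalizeHist(gray, equalized);   // spread out the intensity histogram
        return equalized;
    }

The equalized grayscale frames would then be fed to the subtractor instead of the raw color images; MOG accepts single-channel 8-bit input as well as 3-channel.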


Stats

Asked: 2014-10-31 08:09:58 -0600

Seen: 511 times

Last updated: Oct 31 '14