Background subtraction from single static image
Are there methods that attempt to differentiate foreground and background in a single static image?
Standard OpenCV background subtraction works by modeling the background across multiple frames, but in this case I have only a single image. I realize the methodology would be completely different, but I'm not sure what terms to use other than foreground and background. (So appropriate keywords for a search would be great.)
The background in this case is fairly uniform. Think 'wall with some noise.' The value (luminance) of some of the 'foreground' objects can be relatively close to the background, but I'm guessing that some reasonably sophisticated algorithms have been developed to deal with this.
keywords: image segmentation
(OpenCV has GrabCut and watershed for this. Neural networks can also be trained for this purpose.)
Thanks, berak. Do you know of any examples of neural nets being used for similar applications? I've been trying to figure out how to use a convolutional net to simply identify the foreground objects, which would solve the problem. But there are too many variations in foreground objects to do that directly.
If you know of an approach to identifying background pixels with a neural net, I'd love to hear more about it.
this post may help you