Find significant differences in image

asked 2019-03-28 08:36:14 -0500

tobix10

updated 2019-03-28 08:43:24 -0500

Hi, I'd like to do background segmentation as the first step of my processing pipeline, but it needs to be lightweight and reasonably robust to changing lighting conditions. The goal is to find candidate regions of interest that I can then process with heavier algorithms.

From tests on my ARM CPU, MOG2 is quite slow, so I want to use it only later in the pipeline. I also don't want to update the background model on every frame.

Absdiff plus thresholding is fast, but I don't know how to make it more robust to changes in illumination. Any ideas? It doesn't have to be great, just enough to improve the results.

I think periodic background model updates are not possible in my scenario.

Should I take the mean or median of the first N frames as a background model?
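As a minimal NumPy sketch of that mean/median idea (function names and the threshold value are illustrative choices, not from any answer here): the background is the per-pixel median of the first N frames, and foreground is everything that differs from it by more than a fixed threshold.

```python
import numpy as np

def build_background(frames):
    """Per-pixel median over the first N frames; more robust than the
    mean to brief movers passing through during initialization."""
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def foreground_mask(frame, background, thresh=30):
    """Absolute difference against the static background, binarized
    with a fixed threshold (the weak point under changing light)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)
```

In OpenCV proper this corresponds to cv2.absdiff followed by cv2.threshold; the median build is a one-off cost over the first N frames, after which each frame costs only one subtraction and one comparison.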


Comments

Use cv::fitLine() to find an affine lighting correction between frames (a gain and an offset) from either a dense or a sparse (for speed) pixel sampling. The same fit also flags outliers, i.e. pixels more than X standard deviations from the best-fit line (both ghosts and movers). If you want to write your own segmentation algorithm (or search GitHub), ViBe is a very simple and lightweight background/foreground segmentation algorithm. If your camera is moving and the scene is highly non-planar, you're going to have other difficulties that are hard to overcome in a computationally cheap fashion.

Der Luftmensch ( 2019-03-28 08:44:17 -0500 )

"Also I don't want to update bg in each frame."

then BackgroundSubtractor is the wrong class to use.

berak ( 2019-03-28 08:48:27 -0500 )

@Der Luftmensch -- careful, ViBe is patented!

berak ( 2019-03-28 08:49:30 -0500 )

@tobix10 MOG2 is fast, but it depends on how you code it. What @berak stated may not be good for you.

supra56 ( 2019-03-28 09:06:18 -0500 )

@supra56 Maybe so, but on ARM I only get a few FPS (MOG2 paired with some morphological ops and findContours). At the beginning of the pipeline I want something faster, so I don't waste resources when nothing in the image changes.

@Der Luftmensch My camera is stationary. Could you give more details on your approach or point me to some resources?

tobix10 ( 2019-03-28 09:51:38 -0500 )

To use cv::fitLine(), build the input point set not from pixel coordinates but from pixel-value pairs (img1, img2) sampled at some set of image locations. The fit returns the correction to apply to img1 to make it photometrically similar to img2. You might skip any location where either image is 255 (saturated), though iteratively re-weighted least squares shouldn't care. For every pixel pair, compute the orthogonal distance to the line, then the mean and standard deviation of these distances. With the standard deviation, assign each pixel a probability of belonging to the population that generated the line (or apply a simple X-sigma threshold for a binary image). You might then smooth the pixel probabilities with cv::ximgproc::fastGlobalSmootherFilter().
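The comment above describes the method in C++/OpenCV terms. Here is a minimal NumPy sketch of the same idea; function names are mine, a plain least-squares np.polyfit stands in for cv::fitLine's robust fit, and the orthogonal-distance outlier test is simplified to a vertical-residual test:

```python
import numpy as np

def photometric_gain_offset(img1, img2, n_samples=1000, seed=0):
    """Fit img2 ~ gain * img1 + offset over a sparse random sample of
    pixel-value pairs, skipping saturated samples. A stand-in for
    cv::fitLine with a robust distType on the (img1, img2) point set."""
    rng = np.random.default_rng(seed)
    a = img1.ravel().astype(np.float64)
    b = img2.ravel().astype(np.float64)
    idx = rng.choice(a.size, size=min(n_samples, a.size), replace=False)
    x, y = a[idx], b[idx]
    keep = (x < 255) & (y < 255)          # drop saturated samples
    gain, offset = np.polyfit(x[keep], y[keep], 1)
    return gain, offset

def mover_mask(img1, img2, gain, offset, k=3.0):
    """Apply the lighting correction to img1, then flag pixels whose
    residual exceeds k standard deviations as movers/ghosts."""
    corrected = gain * img1.astype(np.float64) + offset
    resid = img2.astype(np.float64) - corrected
    return np.abs(resid) > k * resid.std()
```

Because the correction is fitted per frame pair, a global brightness change (gain/offset) is absorbed by the line, and only pixels that moved stand out as outliers, which is exactly the illumination robustness the absdiff-plus-threshold approach lacks.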

Der Luftmensch ( 2019-03-28 10:35:30 -0500 )