Find significant differences in image
Hi, I'd like to do background segmentation as the first step in my processing pipeline, but I want it to be lightweight and somewhat resistant to lighting changes. The goal is to find candidate regions of interest that I can then process with heavier algorithms.
From tests on my ARM CPU, MOG2 is quite slow, so I'd rather use it later in the pipeline. I also don't want to update the background model on every frame.
Absdiff plus thresholding is fast, but I don't know how to make it more robust to changes in illumination. Any ideas? It doesn't have to be great, but it should improve the results.
I think that periodic background updates are not possible in my scenario.
Should I take the mean or median of the first N frames as the background image?
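(Not from the thread, but to make the question concrete: a per-pixel median over the first N frames is more robust to transient movers than a mean. A minimal numpy-only sketch of median background plus absdiff thresholding, on toy data; the frame sizes and threshold are my own assumptions:)

```python
import numpy as np

def build_background(frames):
    # per-pixel median over the first N frames: robust to transient
    # movers, unlike the mean, which they would bias
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def change_mask(frame, bg, thresh=25):
    # plain absdiff + threshold: cheap, but not illumination-invariant
    diff = np.abs(frame.astype(np.int16) - bg.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

# toy 8x8 scene: one of the first 5 frames contains a transient object
frames = [np.full((8, 8), 100, np.uint8) for _ in range(5)]
frames[2][2:4, 2:4] = 200            # object present in a single frame
bg = build_background(frames)        # median ignores the lone outlier frame

test = np.full((8, 8), 100, np.uint8)
test[5:7, 5:7] = 180                 # new object to detect
mask = change_mask(test, bg)
```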
Use cv::fitLine() to find an affine lighting correction between frames (scale and offset) from either a dense or sparse (for speed) pixel sampling. This affine correction will also flag outliers, both ghosts and movers, via X standard deviations from the best-fit line. If you want to write your own segmentation algorithm (or search GitHub), ViBe is a very simple and lightweight algorithm for background/foreground segmentation. If your camera is moving and the scene is highly non-planar, you're going to have other difficulties that are hard to overcome in a computationally cheap fashion.
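As a rough sketch of the idea, here is a numpy-only stand-in for the cv::fitLine() step: fit the gain/offset (a, b) between two frames by ordinary least squares over a sparse pixel sample (a simplification of fitLine's orthogonal/M-estimator fit), then flag pixels whose residual is more than 3 standard deviations from the fit. The synthetic frames, sample size, and 3-sigma cutoff are my own assumptions, not from the answer:

```python
import numpy as np

rng = np.random.default_rng(0)

# two synthetic frames: img2 is img1 under a global lighting change
# (gain 1.2, offset 10), plus one "mover" that violates that model
img1 = rng.integers(30, 200, size=(64, 64)).astype(np.float64)
img2 = 1.2 * img1 + 10.0
img2[10:20, 10:20] += 150.0          # moving object, not a lighting effect

# sparse sample of pixel-value pairs, as the answer suggests for speed
ys = rng.integers(0, 64, 400)
xs = rng.integers(0, 64, 400)
p1, p2 = img1[ys, xs], img2[ys, xs]

# fit p2 ~ a*p1 + b by least squares: the affine lighting correction
A = np.stack([p1, np.ones_like(p1)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, p2, rcond=None)

# residuals against the fitted lighting model; pixels more than
# 3 standard deviations out are ghosts or movers
resid = img2 - (a * img1 + b)
outliers = np.abs(resid - resid.mean()) > 3.0 * resid.std()
```

The few contaminated samples bias the plain least-squares fit slightly, which is why the answer's suggestion of a robust fit (fitLine's M-estimator distances, or iteratively re-weighted least squares) is the better choice in practice.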
@Der Luftmensch -- careful, ViBe is patented!
@tobix10 MOG2 is fast, but it depends on how you code it. And what @berak points out means ViBe isn't a good option for you.
@supra56 Maybe so, but on ARM I only get a few FPS (MOG2 paired with some morphological ops and findContours). At the beginning of the pipeline I want something faster, so I don't waste resources when nothing in the image changes.
@Der Luftmensch My camera is stationary. Could you give more details on your approach or point to some resources?
To use cv::fitLine(), set the input point set not as pixel coordinates, but as pixel value pairs (img1, img2) at some sampling of image locations. This will return the correction to apply to img1 to make it photometrically similar to img2. You might neglect any locations where at least one image is 255 (saturated), though iteratively re-weighted least squares shouldn't care. For every pixel pair, compute the orthogonal distance to the line and determine the mean and standard deviation of these distances. With the standard deviation, assign a probability to each pixel that it is a member of the population that generated the line (or do a simple X% threshold for a binary image). You might then smooth pixel probabilities with cv::ximgproc::fastGlobalSmootherFilter().
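To make the orthogonal-distance step concrete, here is a small numpy sketch. The line is hand-written in the (vx, vy, x0, y0) form that cv::fitLine() returns (unit direction plus a point on the line); the synthetic pixel pairs and the 2-sigma cutoff are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed fitted line p2 = 1.2*p1 + 10, in cv::fitLine() output form:
# unit direction (vx, vy) and a point (x0, y0) on the line
n = np.hypot(1.0, 1.2)
vx, vy, x0, y0 = 1.0 / n, 1.2 / n, 100.0, 130.0

# synthetic pixel-value pairs: most follow the lighting model with small
# photometric noise; the first five are "movers" far off the line
p1 = rng.uniform(30, 200, 500)
p2 = 1.2 * p1 + 10.0 + rng.normal(0.0, 2.0, 500)
p2[:5] += 120.0

# orthogonal (perpendicular) distance of each (p1, p2) pair to the line
d = np.abs(vy * (p1 - x0) - vx * (p2 - y0))

# population statistics of the distances, then a simple X-sigma decision
mu, sigma = d.mean(), d.std()
z = (d - mu) / sigma
mask = z > 2.0        # True = likely foreground (mover/ghost)
```

Instead of the hard cutoff, the z-scores can be mapped to per-pixel probabilities (e.g. via a Gaussian tail) and smoothed, as the answer suggests, with cv::ximgproc::fastGlobalSmootherFilter().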