Learning the background model using images
I have a set of images of a scene taken under slightly changing lighting conditions, in which the objects move slightly from their positions (SET 1). I have another set of images of the same scene where one or more objects are missing or misaligned (SET 2).
I want to learn a background model from SET 1 and then use background subtraction to find the missing or misaligned objects in SET 2. Is there a way to do this using OpenCV?
I'm skeptical that this idea will work.
There is no explicit separation between train/predict functionality, so by default it will also learn from SET 2.
You can extract the modeled background from the OpenCV BackgroundSubtractors. https://docs.opencv.org/4.2.0/d7/df6/...
So if you feed it all of SET 1 and then extract the background image, you can compare SET 2 against that fixed background and nothing further gets learned.
Alternatively, set the learning rate to 0 in the apply() method for the SET 2 images.
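A minimal sketch of that second approach, assuming SET 1 and SET 2 are directories of same-sized images read from disk (the paths, the MOG2 parameters, and the output naming are placeholders, not part of the original question):

```python
import glob
import cv2

set1_paths = sorted(glob.glob("set1/*.png"))  # assumed layout of SET 1
set2_paths = sorted(glob.glob("set2/*.png"))  # assumed layout of SET 2

# Build the background model from SET 1 ("training" phase).
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=len(set1_paths), varThreshold=16, detectShadows=False)

for path in set1_paths:
    frame = cv2.imread(path)
    # learningRate=-1 lets OpenCV pick the rate automatically, so the
    # model adapts to the lighting changes and small shifts in SET 1.
    subtractor.apply(frame, learningRate=-1)

# Optional: inspect the modeled background learned from SET 1.
background = subtractor.getBackgroundImage()
cv2.imwrite("modeled_background.png", background)

# "Predict" on SET 2 with learningRate=0 so these frames do not update
# the model; the foreground mask marks pixels that differ from the
# learned background, i.e. missing or misaligned objects.
for path in set2_paths:
    frame = cv2.imread(path)
    fg_mask = subtractor.apply(frame, learningRate=0)
    cv2.imwrite("mask_" + path.split("/")[-1], fg_mask)
```

The same pattern works with cv2.createBackgroundSubtractorKNN(); the key point is only that learningRate is left at its default while feeding SET 1 and forced to 0 for SET 2.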