Question about the BackgroundSubtractorMOG2 apply method
This is of course an OpenCV beginner question, but I need to clarify something about BackgroundSubtractorMOG2 and how to use it.
I have been looking in more depth at the source code of the BackgroundSubtractorMOG2 class, particularly its apply method:
void BackgroundSubtractorMOG2Impl::apply(InputArray _image, OutputArray _fgmask, double learningRate)
{
    CV_INSTRUMENT_REGION()

    bool needToInitialize = nframes == 0 || learningRate >= 1 || _image.size() != frameSize || _image.type() != frameType;

    if( needToInitialize )
        initialize(_image.size(), _image.type());

#ifdef HAVE_OPENCL
    if (opencl_ON)
    {
        CV_OCL_RUN(_image.isUMat(), ocl_apply(_image, _fgmask, learningRate))

        opencl_ON = false;
        initialize(_image.size(), _image.type());
    }
#endif

    Mat image = _image.getMat();
    _fgmask.create( image.size(), CV_8U );
    Mat fgmask = _fgmask.getMat();

    ++nframes;
    learningRate = learningRate >= 0 && nframes > 1 ? learningRate : 1./std::min( 2*nframes, history );
    CV_Assert(learningRate >= 0);

    parallel_for_(Range(0, image.rows),
                  MOG2Invoker(image, fgmask,
                              bgmodel.ptr<GMM>(),
                              (float*)(bgmodel.ptr() + sizeof(GMM)*nmixtures*image.rows*image.cols),
                              bgmodelUsedModes.ptr(), nmixtures, (float)learningRate,
                              (float)varThreshold,
                              backgroundRatio, varThresholdGen,
                              fVarInit, fVarMin, fVarMax, float(-learningRate*fCT), fTau,
                              bShadowDetection, nShadowDetection),
                  image.total()/(double)(1 << 16));
}
which is the method to call when we want to generate the foreground mask.
When using it in an infinite loop, is it correct that I should check for a new foreground mask only once every nframes calls to the apply method?
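As far as I can tell from the snippet, when the caller passes a negative learningRate (the default), apply substitutes 1./std::min(2*nframes, history), so the model adapts quickly during the first frames and then settles at 1/history. A minimal Python sketch of my reading of that schedule (the function name effective_learning_rate is my own, just for illustration):

```python
# Sketch (my reading of the snippet above) of the automatic learning-rate
# schedule used when a negative learningRate is passed to apply().

def effective_learning_rate(learning_rate, nframes, history):
    # Mirrors: learningRate >= 0 && nframes > 1 ? learningRate
    #                                           : 1./std::min(2*nframes, history)
    if learning_rate >= 0 and nframes > 1:
        return learning_rate
    return 1.0 / min(2 * nframes, history)

# With the default learningRate of -1 the rate decays frame by frame
# and settles at 1/history once nframes reaches history/2.
for n in (1, 2, 5, 250, 1000):
    print(n, effective_learning_rate(-1, n, 500))
```

If that reading is right, nframes only controls how fast the background model adapts, not how often a mask is produced.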
Have you tried this tutorial?
Yes, but that example doesn't completely resolve my question.