TL;DR (short answer):

As I understand it, the history parameter simply sets the learning rate alpha of the algorithm as alpha = 1/history, i.e. the weights of older frames decay exponentially. For example, setting history=2 means alpha = 1/2 = 0.5, so the newest frame is weighted 50%, the previous frame 25%, the one before that 12.5%, and so on. Your history value is likely much larger (the default is 500, i.e. learning rate alpha = 1/500 = 0.002), so the exponential decay of the weights is much flatter. A nice visualization of exponential decay and the associated half-lives can (at the time of writing) be found at https://upload.wikimedia.org/wikipedia/commons/8/88/Doubling_time_vs_half_life.svg (where the flattest curve represents alpha=0.01, i.e. history=100).
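
To make that weighting concrete, here is a tiny Python sketch (just the arithmetic implied by alpha = 1/history, not OpenCV code) that prints the weight each past frame receives:

# Weight of the frame from k steps ago under an exponential moving average
# with alpha = 1/history (plain arithmetic, not OpenCV code).
def frame_weights(history, n=5):
    alpha = 1.0 / history
    return [alpha * (1 - alpha) ** k for k in range(n)]

print(frame_weights(2))    # [0.5, 0.25, 0.125, 0.0625, 0.03125]
print(frame_weights(500))  # [0.002, 0.001996, 0.00199201, ...] -- much flatter decay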

Btw, thanks for asking that question! I've been struggling with it myself recently (and, more generally, with the confusing organization and incomplete nature of the OpenCV documentation over the years; disclaimer: I do deeply love and heavily use OpenCV!).

Long explanation:

This answer comes 2.5 years late and I hope the original poster has solved their problem, but just to spare/ease others the trouble, I will attempt to justify/substantiate my answer. For a start, the most recent (and still just as incomplete) documentation of the class can be found at https://docs.opencv.org/master/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html (as long as they don't decide to change the organization of the docs again). There, two papers are cited that explain the algorithm; their published versions reside at https://www.doi.org/10.1109/ICPR.2004.1333992 ['Zivkovic2004'] and https://doi.org/10.1016/j.patrec.2005.11.005 ['Zivkovic2006'], neither of which is Open Access (PDFs can be found by searching for the papers, e.g. on Google Scholar). [Side note: I do consider it questionable practice to outsource the documentation of open-source code to non-Open-Access publications (yet still worlds better than close-sourcing the code). Also, imho, OpenCV could at least add such DOI-based permalinks to its bibliography to make things a teeny-weeny little less tedious.]

In Zivkovic2004, it says: "Instead of the time interval T [the "history" parameter] that was mentioned above, here constant α describes an exponentially decaying envelope that is used to limit the influence of the old data. We keep the same notation having in mind that approximately α = 1/T."

Looking at the source code (which [again, at the time of writing] can be found at https://github.com/opencv/opencv/blob/master/modules/video/src/bgfg_gaussmix2.cpp ), I find the "approximately" refers to the number of frames already processed: as long as fewer than half of the 'history' frames have been seen, the automatically chosen learning rate alpha is 1/(2*nframes) rather than 1/history. See lines 779 and 869 [at the time of writing]:

"learningRate = learningRate >= 0 && nframes > 1 ? learningRate : 1./std::min( 2*nframes, history );"

[My understanding of C++ is limited (having been a Python Monty for years now), but I read the ternary as follows: if the caller passed a non-negative learningRate and at least two frames have been seen, that learningRate is used as-is; otherwise (e.g. with the default learningRate of -1) alpha is set to 1/min(2*nframes, history).]
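
If that reading is right, the line could be re-expressed in Python roughly like this (a sketch, not OpenCV code; the variable names just mirror the C++ source):

# My reading of the C++ ternary above (names mirror bgfg_gaussmix2.cpp):
# a caller-supplied, non-negative learning rate is used as-is after the first frame;
# otherwise (e.g. the default learningRate of -1) the rate is picked automatically.
def effective_alpha(learning_rate, nframes, history):
    if learning_rate >= 0 and nframes > 1:
        return learning_rate
    return 1.0 / min(2 * nframes, history)  # 1/(2*nframes) while nframes < history/2, then 1/history

So with the default call, the model adapts faster while it is still warming up, and the effective alpha settles at 1/history once roughly history/2 frames have been processed.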

Also see line 106, which states:

"static const int defaultHistory2 = 500; // Learning rate; alpha = 1/defaultHistory2"

Possibly, the real story is more complicated: how each pixel is evaluated in every frame is implemented beginning at line 523.

Either way, the above answer seems incorrect on both of its claims:

Claim 1: "It simply states how many previous frames are used for building the background model." -> In theory (in the algorithm), ALL past frames influence the background model, no matter what you set the history parameter to; their weights just decay more slowly (for a high history value) or more quickly (for a low one), always exponentially. In practice, finite computer precision will completely eliminate the (very minor) influence of much older frames at some point. Crucially, past frames don't seem to be stored in memory at all: at every step, the new frame is simply folded into the background model according to the learning rate, which is exactly what produces the exponential decay of the weights.
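
To illustrate that last point (only a running model is kept, never a buffer of past frames), here is a toy sketch of such a recursive update; the real MOG2 maintains a mixture of Gaussians per pixel rather than a single average, but the memory argument is the same:

import numpy as np

# Toy recursive background update: only 'model' is kept between frames.
# Each new frame is folded in with weight alpha, so the frame from k steps ago
# ends up contributing with weight alpha * (1 - alpha)**k.
def update_background(model, frame, alpha):
    return (1.0 - alpha) * model.astype(np.float32) + alpha * frame.astype(np.float32)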

Claim 2: "So basically if an item is standing at a fixed position for as many frames as the history size, then it will disappear in the background." -> This seems very wrong to me! The original paper (Zivkovic2004) explicitly provides an example here: "[...] where cf is a measure of the maximum portion of the data that can belong to foreground objects without influencing the background model. For example, if a new object comes into a scene and remains static for some time it will probably generate an additional stabile cluster. Since the old background is occluded the weight πB+1 of the new cluster will be constantly increasing. If the object remains static long enough, its weight becomes larger than cf and it can be considered to be part of the background. If we look at (4) we can conclude that the object should be static for approximately log(1 − cf)/log(1 − α) frames. For example for cf = 0.1 and α = 0.001 we get 105 frames." While I don't claim to understand how to compute the value cf for any detected object or pixel [I was also just quickly looking for an answer to the original poster's question], this clearly states that with alpha=0.001 (i.e. history = 1/alpha = 1000), a static object is absorbed into the 'background' after about 105 frames in this example, i.e. almost 10 times fewer frames than the value of the history parameter!
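
Just to sanity-check that number, the formula from the paper is easy to evaluate:

import math

# Frames a static object needs before it is absorbed into the background,
# using the log(1 - cf)/log(1 - alpha) formula quoted above:
cf, alpha = 0.1, 0.001
print(math.log(1 - cf) / math.log(1 - alpha))  # ~105.3, matching the paper's "105 frames"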

I'm well aware of my own shortcomings, and this answer is definitely not based on a full understanding of what's going on in this algorithm in detail, so I would love to be corrected and pointed to a more complete answer/explanation of how the history parameter actually influences the computation of the background model. In a more ideal world, I would wish for the OpenCV docs to fulfill their purpose more fully/inclusively/barrier-free, etc., particularly because I rely on these incredibly useful libraries on a daily basis. Being a scientist (a biologist, in my case), I need to understand/justify/explain the methods I use, and in this regard OpenCV is giving me a hard time (regardless of how beautifully (and fast) it generally works!)