
Does BackgroundSubtractorKNN respect a learning rate of 0 correctly?

asked 2019-01-23 04:25:55 -0600

NicholasH

I'm getting a strange result: it looks like the BackgroundSubtractorKNN algorithm may still be learning even when the learning rate parameter passed to the apply() function is set to zero.

Looking at the source code for apply(), I see that:

learningRate = learningRate >= 0 && nframes > 1 ? learningRate : 1./std::min( 2*nframes, history );

Kshort = (int)(log(0.7)/log(1-learningRate)) + 1;
Kmid   = (int)(log(0.4)/log(1-learningRate)) - Kshort + 1;
Klong  = (int)(log(0.1)/log(1-learningRate)) - Kshort - Kmid + 1;

This strikes me as odd: with learningRate == 0, log(1 - learningRate) is log(1) == 0, so a division by zero occurs. Can anyone confirm or correct me?
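
To illustrate, here is a minimal sketch of that arithmetic in isolation (assuming IEEE-754 doubles and a 32-bit int; the variable names just mirror the source above):

#include <cmath>
#include <cstdio>

int main()
{
    double learningRate = 0.0;
    // log(1 - 0) == log(1) == 0, so the division yields -infinity
    double ratio = std::log(0.7) / std::log(1.0 - learningRate);
    // Converting an infinity to int is undefined behavior; on x86 it
    // typically comes out as INT_MIN, so Kshort becomes INT_MIN + 1.
    int Kshort = (int)ratio + 1;
    std::printf("ratio = %f\nKshort = %d\n", ratio, Kshort);
    return 0;
}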


1 answer


answered 2019-02-09 20:57:29 -0600

V

Yes, there is a bug, and it does result in learning when the learning rate is set to zero. With a learning rate of 0, log(1 - learningRate) is log(1) == 0, so the division produces -infinity; converting that to int is undefined behavior, and on typical platforms it comes out as INT_MIN (-2^31). The model update intervals may then be set between 0 and a large negative number, which I think results in complete learning of the next frame: if you use apply() twice on the same image with a learning rate of zero, you always seem to get an empty mask.
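
A minimal sketch of that repro (the flat gray frame is just a stand-in for any image):

#include <opencv2/video.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Ptr<cv::BackgroundSubtractorKNN> knn = cv::createBackgroundSubtractorKNN();

    // Any frame will do; a flat gray image keeps the example self-contained.
    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(128, 128, 128));
    cv::Mat mask;

    knn->apply(frame, mask, 0.0); // learning rate 0: the model should stay frozen
    knn->apply(frame, mask, 0.0); // second call on the same frame

    // Per the observation above, the mask always comes back empty, which
    // suggests the frame was fully absorbed into the model despite the
    // zero learning rate.
    std::cout << "foreground pixels: " << cv::countNonZero(mask) << std::endl;
    return 0;
}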

If you don't want to patch it, a temporary workaround is to set the learning rate to a very small positive number instead of zero.
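
For example (a sketch only; 1e-6 is an arbitrary small value, not a tuned constant):

#include <opencv2/video.hpp>
#include <opencv2/core.hpp>

int main()
{
    cv::Ptr<cv::BackgroundSubtractorKNN> knn = cv::createBackgroundSubtractorKNN();

    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(128, 128, 128)); // placeholder frame
    cv::Mat mask;

    // A tiny positive rate keeps log(1 - learningRate) nonzero, so the K
    // intervals stay sane while the model is still effectively frozen.
    knn->apply(frame, mask, 1e-6);
    return 0;
}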

