Feature extraction with variable backgrounds
I am trying to create an iOS app that can recognize simple hand gestures. I am currently using SURF features, a bag-of-words model, and an SVM classifier. I trained the SVM in bright lighting against a blank (white) background, and the app currently works in similar environments. I want it to work with any background: for example, it fails in my school's lab, where pipes running along the ceiling end up in the background.
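For context, the recognition step is the standard OpenCV SURF / bag-of-words / SVM pipeline, roughly like the sketch below (simplified, written against the OpenCV 2.4 C++ API; the hessian threshold of 400 is a placeholder, and the vocabulary and trained SVM are assumed to be loaded already):

```cpp
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>   // BOWImgDescriptorExtractor, DescriptorMatcher
#include <opencv2/nonfree/features2d.hpp>      // SurfFeatureDetector, SurfDescriptorExtractor
#include <opencv2/ml/ml.hpp>                   // CvSVM

// Classify one camera frame, given a pre-built visual vocabulary and a trained SVM.
float classifyFrame(const cv::Mat& frame, const cv::Mat& vocabulary, const CvSVM& svm)
{
    cv::SurfFeatureDetector detector(400);     // hessian threshold -- placeholder value
    cv::Ptr<cv::DescriptorExtractor> extractor = new cv::SurfDescriptorExtractor();
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");

    cv::BOWImgDescriptorExtractor bow(extractor, matcher);
    bow.setVocabulary(vocabulary);             // vocabulary built offline with BOWKMeansTrainer

    // Keypoints are detected over the WHOLE frame, so the background
    // contributes visual words to the histogram as well.
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(frame, keypoints);

    cv::Mat bowDescriptor;
    bow.compute(frame, keypoints, bowDescriptor);
    if (bowDescriptor.empty())
        return -1.0f;                          // no features found in this frame

    return svm.predict(bowDescriptor);         // predicted gesture label
}
```

Because the keypoints come from the whole frame, the bag-of-words histogram picks up whatever is behind the hand, which I suspect is why the classifier breaks down in the lab.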
I thought about using the BackgroundSubtractorMOG algorithm to remove the background, but I don't know how effective it will be: over time the algorithm relegates the center of the hand to the background, because that region doesn't appear to be moving.
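What I had in mind is roughly the sketch below (OpenCV 2.4 C++; the separate calibration phase, the 0.01 learning rate, and the idea of freezing the model with learningRate = 0 are my own guesses, not something I have validated):

```cpp
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/video/background_segm.hpp>   // BackgroundSubtractorMOG (OpenCV 2.4)

// Learn the background from frames captured with the hand out of view, then
// segment a live frame with the model frozen (learningRate = 0) so a hand
// that is held still is not gradually absorbed into the background.
cv::Mat segmentForeground(cv::BackgroundSubtractorMOG& mog,
                          const std::vector<cv::Mat>& calibrationFrames,
                          const cv::Mat& liveFrame)
{
    cv::Mat fgMask;

    // Calibration phase: adapt the mixture model to the empty scene.
    for (size_t i = 0; i < calibrationFrames.size(); ++i)
        mog(calibrationFrames[i], fgMask, 0.01);   // explicit, non-zero learning rate

    // Recognition phase: learningRate = 0 stops further model updates.
    mog(liveFrame, fgMask, 0.0);

    return fgMask;   // candidate mask for filtering out background keypoints
}
```

The resulting mask could then be used to discard SURF keypoints that fall on the background before computing the bag-of-words histogram, but I don't know whether this approach is robust in practice.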
How can I get my feature extraction and classifier working with variable backgrounds?