advice for a hand tracking algorithm
Hello. I am currently working on a college project where I have to design a program for the AR.Drone 2.0 that tracks my hand with the front video camera and follows it. I am using OpenCV 2.4.10 and have managed to detect the hand using a combination of skin and SURF detectors. The method is sufficiently accurate while the drone is hovering; however, it is too slow for tracking.
After my detection phase, I draw two bounding boxes around the hand: one that encloses the hand and another slightly larger one that becomes my region of interest. My thinking is that if the hand is fully enclosed by the bounding box in frame n, in frame n+1 it will have moved slightly beyond that box but will still be found within the second bounding box.
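That two-box scheme boils down to growing the detected bounding box by a margin and clipping it to the frame. A minimal sketch (the function name and the 30% margin are my own choices, not anything from the project):

```python
def expand_box(x, y, w, h, frame_w, frame_h, margin=0.3):
    """Grow a bounding box by a fractional margin on every side and
    clip it to the frame, giving the region of interest for frame n+1."""
    dx, dy = int(w * margin), int(h * margin)
    nx, ny = max(0, x - dx), max(0, y - dy)
    nw = min(frame_w, x + w + dx) - nx
    nh = min(frame_h, y + h + dy) - ny
    return nx, ny, nw, nh
```

The margin effectively encodes an assumption about the hand's maximum speed: it must not move further than the margin between two consecutive frames, or it escapes the ROI.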
Currently I am tracking the hand inside the region of interest purely by applying skin detection and redrawing the bounding boxes around the hand contours. It is very fast; however, when other skin-colored objects (my face, another hand, etc.) enter the region of interest, it becomes impossible to distinguish between the two or more objects.
I am looking for advice, from your experience, on a tracking algorithm for my hand that will work inside the bounding box, is not too computationally demanding (it has to work on live video), and is reasonably easy for a beginner in computer vision to implement. From what I've read and seen on the internet, I could try mean shift, CamShift, FAST, or something similar, but I really don't know which one to choose. I'm somewhat tight on time and want to avoid experimenting with different algorithms only to see them fail, so I am asking for your advice on this matter.
EDIT: As suggested in the answers below, you need to run a tracker in between detections and use its prediction step. That way, if you lose a detection, the tracker will still know where the hand went.