Take a look at: http://wiki.ros.org/pi_face_tracker
I think that your idea is more or less based on this tracking algorithm.
Once a face is detected (Viola & Jones), the "GoodFeaturesToTrack" filter is run over the face region. These points are then fed to the Lucas-Kanade optical flow tracker, which follows them from one frame to the next. Two additional methods maintain the integrity of the tracked feature cluster (see the sketch after this list):

1. Feature points are pruned (dropped) if they lie too far outside the current cluster in the x-y camera plane or in depth.
2. New feature points are added by expanding the current cluster ROI by a small amount (10%) and running "GoodFeaturesToTrack" over the expanded region.

If the feature cluster shrinks below a specified minimum number of points, the algorithm returns to the Haar face detector and rescans the video to relocate the face.
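For reference, here is a rough OpenCV/Python sketch of that loop, not the actual pi_face_tracker source. The thresholds (`MIN_POINTS`, `PRUNE_STD`, the 10% expansion factor) and the Haar cascade file are illustrative assumptions, and the depth-based pruning is omitted since this sketch assumes a plain webcam with no depth channel.

```python
# Sketch of the detect -> track -> prune -> re-seed loop (assumptions noted above).
import cv2
import numpy as np

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
MIN_POINTS = 25   # below this, fall back to the Haar detector (assumed threshold)
PRUNE_STD = 2.5   # prune points further than this many std devs from the cluster mean
EXPAND = 0.10     # grow the cluster ROI by 10% before re-seeding

def detect_face(gray):
    """Viola-Jones (Haar cascade) detection; returns the first face ROI or None."""
    faces = FACE_CASCADE.detectMultiScale(gray, 1.3, 5)
    return tuple(faces[0]) if len(faces) else None

def seed_points(gray, roi):
    """Run GoodFeaturesToTrack restricted to the given ROI."""
    x, y, w, h = roi
    mask = np.zeros_like(gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=5, mask=mask)
    return pts if pts is not None else np.empty((0, 1, 2), np.float32)

def prune(pts):
    """Drop points lying too far from the cluster centre in the x-y plane."""
    if len(pts) < 3:
        return pts
    flat = pts.reshape(-1, 2)
    mean, std = flat.mean(axis=0), flat.std(axis=0) + 1e-6
    keep = np.all(np.abs(flat - mean) < PRUNE_STD * std, axis=1)
    return pts[keep]

def cluster_roi(pts, shape):
    """Bounding box of the cluster, expanded by EXPAND on each side."""
    flat = pts.reshape(-1, 2)
    x0, y0 = flat.min(axis=0)
    x1, y1 = flat.max(axis=0)
    dx, dy = (x1 - x0) * EXPAND, (y1 - y0) * EXPAND
    x0, y0 = max(int(x0 - dx), 0), max(int(y0 - dy), 0)
    x1, y1 = min(int(x1 + dx), shape[1] - 1), min(int(y1 + dy), shape[0] - 1)
    return x0, y0, x1 - x0, y1 - y0

cap = cv2.VideoCapture(0)
prev_gray, points = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if points is None or len(points) < MIN_POINTS:
        # Cluster too small (or first frame): go back to the Haar detector.
        roi = detect_face(gray)
        points = seed_points(gray, roi) if roi else None
    else:
        # Lucas-Kanade optical flow: follow the points into the new frame.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        points = prune(new_pts[status.flatten() == 1].reshape(-1, 1, 2))
        if len(points) >= 3:
            # Re-seed inside the slightly expanded cluster ROI to add new points.
            fresh = seed_points(gray, cluster_roi(points, gray.shape))
            if len(fresh):
                points = np.vstack([points, fresh])

    prev_gray = gray
    for p in (points if points is not None else []):
        cv2.circle(frame, tuple(int(v) for v in p.ravel()), 2, (0, 255, 0), -1)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The design point to note is the two-speed structure: the (slow) Haar detector runs only when the feature cluster collapses, while the (fast) Lucas-Kanade step plus pruning/re-seeding keeps the tracker alive in between.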
Video: https://www.youtube.com/watch?v=Yw_zkLaZNsQ