Inverse bilinear interpolation (pupil tracker)
I have built an eye-tracking application using OpenCV, and I want to control the mouse pointer with the position of the left eye's pupil.
What I have are four pupil positions that correspond to the four screen corners. Given those four calibration points, I would now like to map the current pupil coordinate to a screen coordinate.
Are there any built-in functions in OpenCV that would let me do this? I have already done some research and found that inverse bilinear interpolation should solve the problem, but I cannot find that functionality in OpenCV for Point2f types.
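As far as I know, OpenCV does not expose an inverse bilinear interpolation function for Point2f. If the mapping can be modeled as a projective transform instead, the built-in route is cv2.getPerspectiveTransform on the four calibration pairs followed by cv2.perspectiveTransform for each new point. The true inverse bilinear solution is also short enough to write yourself: it reduces to a quadratic in one of the two parameters. A minimal NumPy sketch under that derivation (the function name and corner ordering are my own choices, not an OpenCV API):

```python
import numpy as np

def inverse_bilinear(p, a, b, c, d):
    """Map point p inside the quad (a, b, c, d) to (u, v) in [0, 1]^2.

    Corner order: a=(0,0), b=(1,0), c=(1,1), d=(0,1) in (u, v) space,
    i.e. the calibration pupil positions for the four screen corners
    in the order top-left, top-right, bottom-right, bottom-left.
    Returns None if p lies outside the quad.
    """
    p, a, b, c, d = (np.asarray(x, dtype=float) for x in (p, a, b, c, d))
    cross = lambda r, s: r[0] * s[1] - r[1] * s[0]  # 2-D scalar cross product

    e = b - a
    f = d - a
    g = a - b + c - d
    h = p - a

    # Bilinear model: p = a + u*e + v*f + u*v*g.  Eliminating u gives a
    # quadratic in v:  cross(g,f)*v^2 + (cross(e,f)+cross(h,g))*v + cross(h,e) = 0
    k2 = cross(g, f)
    k1 = cross(e, f) + cross(h, g)
    k0 = cross(h, e)

    if abs(k2) < 1e-12:                # parallelogram: the equation is linear
        vs = [-k0 / k1]
    else:
        disc = k1 * k1 - 4.0 * k2 * k0
        if disc < 0:
            return None
        sq = np.sqrt(disc)
        vs = [(-k1 - sq) / (2.0 * k2), (-k1 + sq) / (2.0 * k2)]

    # Keep the root that lands inside the unit square.
    for v in vs:
        denom = e[0] + g[0] * v
        if abs(denom) < 1e-12:         # fall back to the y-component
            u = (h[1] - f[1] * v) / (e[1] + g[1] * v)
        else:
            u = (h[0] - f[0] * v) / denom
        if -1e-9 <= u <= 1 + 1e-9 and -1e-9 <= v <= 1 + 1e-9:
            return u, v
    return None
```

The returned (u, v) is the fractional screen position, so the pointer target is simply (u * screen_width, v * screen_height).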
I have simply used a percentage of the two coordinates (x and y) and had OK results. How are you keeping the eye socket in the same relative position to the screen? I actually tried a head-up display with a camera fitted into it (so that the eye socket is stationary with respect to both the camera and the screen), but that would prevent widespread use. Using a webcam and a face tracker, it should be possible to find the center of the eye socket at any time (even with head movement), from which the gaze direction can be calculated. I suspect that the much lower resolution of the webcam will require both eyes to be tracked to increase the effective resolution. Please share your code once you have it working: a friend of mine has muscular dystrophy, and such a tool would be much appreciated.
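For reference, the "percentage" approach described above amounts to interpolating x and y independently between the calibrated extremes, ignoring any skew in the quad of corner points, which is why it only gives roughly correct results. A minimal sketch (the function name, calibration tuple, and screen size are illustrative, not from the original code):

```python
def pupil_to_screen(px, py, cal, screen_w, screen_h):
    """Linearly map a pupil coordinate to a screen coordinate.

    `cal` holds the pupil positions recorded while looking at the
    screen corners, collapsed to per-axis extremes:
    (x_min, x_max, y_min, y_max).
    """
    x_min, x_max, y_min, y_max = cal
    fx = (px - x_min) / (x_max - x_min)   # fraction across the screen
    fy = (py - y_min) / (y_max - y_min)
    # Clamp so the pointer never leaves the screen.
    fx = min(max(fx, 0.0), 1.0)
    fy = min(max(fy, 0.0), 1.0)
    return fx * screen_w, fy * screen_h
```

For example, with calibration (50, 60, 40, 48) and a 1920x1080 screen, a pupil at (55, 42) maps to the point halfway across and a quarter of the way down: (960, 270).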
Hello, I have to develop a pupil tracker for a mouse pointer, just as you do. Did you get it working yet? And could you share your code? That would be great! :)
Also, another question: what kind of camera did you use, and what are its specifications?
Thank you very much for your answers!