I've implemented most of the algorithm outlined here - I can display the detected points on a 2D plane. Now I'd like to go further and estimate the head's pose from the detected points. In the linked blog post, the author says "[the features] are unprojected and intersected with the virtual cylinder. Exact solution to this ray-cylinder intersection could easily be found on the net." I'm not familiar with this problem, so I don't know how to evaluate the Google results for "ray-cylinder intersection", much less understand what the intersection actually involves. Could anyone point me in the right direction for research?
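
For reference, this is my current understanding of what the ray-cylinder intersection boils down to - a minimal C++ sketch, assuming an infinite cylinder of radius `r` whose axis is the y-axis (the struct and function names are mine, not from the blog post):

```cpp
#include <cmath>
#include <optional>

// Hypothetical types/names, just to illustrate the quadratic solution.
struct Vec3 { double x, y, z; };

// Ray p(t) = origin + t * dir, t >= 0, against the infinite cylinder
// x^2 + z^2 = r^2. Substituting p(t) gives a quadratic in t; the smallest
// positive root is the first (visible) intersection.
std::optional<double> intersectRayCylinder(const Vec3& origin, const Vec3& dir, double r)
{
    double a = dir.x * dir.x + dir.z * dir.z;
    double b = 2.0 * (origin.x * dir.x + origin.z * dir.z);
    double c = origin.x * origin.x + origin.z * origin.z - r * r;

    if (a == 0.0) return std::nullopt;        // ray parallel to the cylinder axis

    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return std::nullopt;      // ray misses the cylinder entirely

    double sq = std::sqrt(disc);
    double t0 = (-b - sq) / (2.0 * a);        // nearer root
    double t1 = (-b + sq) / (2.0 * a);        // farther root
    if (t0 > 0.0) return t0;
    if (t1 > 0.0) return t1;                  // ray starts inside the cylinder
    return std::nullopt;                      // cylinder is behind the ray origin
}
```

My guess is that the ray origin would be the camera center and the direction the unprojected feature point, with the cylinder transformed by the current pose estimate, but that's exactly the part I'd like confirmation on.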
A secondary issue is that my program finds "good features" in the background behind me as well as along the outline of my head, which the author's implementation doesn't do. Does anyone know how he avoided that?
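
One idea I've considered (I have no idea whether this is what the author actually does) is masking the corner detector to the face region, so nothing in the background qualifies. A rough sketch of that idea, assuming OpenCV's `cv::goodFeaturesToTrack` and a face rectangle obtained from a separate face detector; the parameter values are placeholders:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Detect corners only inside 'faceRect' by passing a mask that is
// non-zero within the face region and zero everywhere else.
std::vector<cv::Point2f> detectFeaturesInFace(const cv::Mat& gray, const cv::Rect& faceRect)
{
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    // Clip the rectangle to the image bounds before setting the mask.
    mask(faceRect & cv::Rect(0, 0, gray.cols, gray.rows)).setTo(255);

    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            /*maxCorners=*/100,
                            /*qualityLevel=*/0.01,
                            /*minDistance=*/5,
                            mask);
    return corners;
}
```

Even with a mask like this, features along the edge of my head would still fall inside the rectangle, so I suspect there's more to it - which is why I'm asking.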