Where to set origin of model for solvePnP
Hi everyone, I've been figuratively tearing my hair out over this problem for over a month now. I'll admit I haven't taken a good hard look at the math behind solvePnP (most people don't, which is why we have open-source libraries).
I'd like to ask for some practical advice on using solvePnP, specifically where to position the model's origin.
Background: I'm using OpenCV to make a head-tracking app. Tracking the objects (4 of them) on screen isn't really the problem; recovering the 6DoF pose is.
Camera: a mobile phone camera. I haven't performed calibration, but I figure that as long as the subject keeps their head near the center of the view, it shouldn't be a big problem. I do notice significant barrel distortion towards the edges, though.
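In case it's relevant: if I do end up calibrating, I'd use the standard checkerboard routine, roughly like the sketch below (the pattern size and file paths are placeholders I made up):

```python
import glob
import numpy as np
import cv2

# Standard checkerboard calibration sketch; assumes a 9x6 inner-corner
# pattern and images under calib/ (both placeholders).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Yields the camera matrix and distortion coefficients (k1, k2, p1, p2, k3),
# which should also account for the barrel distortion I'm seeing.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
```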
Parameters fed into solvePnP:
- Image points: from the tracker (origin in the lower left).
- Model points: from the 3D model (origin in the lower left, translated to match OpenCV's upper-left origin).
- Camera matrix: fx and fy set to the larger image dimension, principal point at the image center, no skew (this may change if I calibrate).
- Method: SOLVEPNP_ITERATIVE with useExtrinsicGuess=true, because the output is very jittery without it.
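For concreteness, here's a minimal sketch of the call as I'm making it. The image size, marker layout, and point coordinates are all made-up placeholders:

```python
import numpy as np
import cv2

# Rough intrinsics for the uncalibrated camera: fx = fy = larger image
# dimension, principal point at the image center, zero skew.
w, h = 1280, 720
f = float(max(w, h))
camera_matrix = np.array([[f, 0.0, w / 2.0],
                          [0.0, f, h / 2.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # no calibration, so no distortion terms

# Four tracked features in the head model's frame (placeholder values,
# roughly in millimeters) and their tracked pixel positions.
model_points = np.array([[-60.0,  40.0,  0.0],   # left marker
                         [ 60.0,  40.0,  0.0],   # right marker
                         [  0.0,   0.0, 30.0],   # front marker
                         [  0.0, -50.0,  0.0]])  # bottom marker
image_points = np.array([[500.0, 300.0],
                         [780.0, 310.0],
                         [640.0, 400.0],
                         [635.0, 520.0]])

# Previous frame's pose, reused as the extrinsic guess to damp jitter.
rvec = np.zeros((3, 1))
tvec = np.array([[0.0], [0.0], [500.0]])  # start the head in front of the camera

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs,
                              rvec, tvec, useExtrinsicGuess=True,
                              flags=cv2.SOLVEPNP_ITERATIVE)
```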
So the specific advice I want to ask for: where should I set the origin of the model (in this case, a human head) to get consistent 6DoF values? Ideally, I want the rotation values "anchored" inside the neck (a few centimeters behind the anterior surface). I've tried translating the head model points so the origin sits inside the neck, but the z-translation values go bonkers once a significant portion of the model lies in the -z range. It also stops resolving the x-rotation properly: I get positive values whether I look up or down.
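Continuing the sketch above, this is essentially the translation I tried (the neck offset is a hypothetical value):

```python
# Where the neck pivot sits in the original model frame (hypothetical value;
# in my real model it's a few centimeters behind the anterior surface).
neck_in_model = np.array([0.0, -80.0, -40.0])

# Shift all model points so the origin is the neck pivot. With my real
# model, a significant chunk of the head ends up at negative z after this.
model_points_neck = model_points - neck_in_model

# rvec/tvec should then describe the neck pivot's pose in camera space.
ok, rvec, tvec = cv2.solvePnP(model_points_neck, image_points,
                              camera_matrix, dist_coeffs,
                              rvec, tvec, useExtrinsicGuess=True,
                              flags=cv2.SOLVEPNP_ITERATIVE)
```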
Should I be placing the model in a particular octant of model space? Should I avoid having any part of the model in the -z range, or should I expect solvePnP to return a negative z-translation in that situation?