I am running solvePnPRansac and get my rvec and tvec Mats back, which describe the object/world frame relative to the camera (i.e. they map object-space points into camera coordinates). I invert them to get the camera pose, using this snippet (from: http://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp?rq=1):
cv::Mat R;
cv::Rodrigues(rvec, R); // R is 3x3
R = R.t(); // rotation of inverse
tvec = -R * tvec; // translation of inverse
cv::Mat T(4, 4, R.type()); // T is 4x4
T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1; // copies R into T
T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
// fill the last row of T (NOTE: depending on your types, use float or double)
double *p = T.ptr<double>(3);
p[0] = p[1] = p[2] = 0; p[3] = 1;
std::cout << T << std::endl;
// T is a 4x4 matrix with the pose of the camera in the object frame
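For reference, rvec and tvec come from a call roughly like this (objectPoints, imagePoints, cameraMatrix and distCoeffs are placeholder names for my own data, not exact code):
// objectPoints: 3D points in the object/world frame; imagePoints: their 2D detections
cv::Mat rvec, tvec;
cv::solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
// rvec/tvec map object-space points into the camera frame; the block above inverts that
// to get the camera pose in the object frame.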
That SHOULD give me the camera pose. But I can't really tell, because the values don't seem to accumulate: the translation only changes by a tiny amount as I move the camera, as if it were showing me the motion per frame rather than adding it to the previous motion.
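To be concrete, here is roughly how I'm reading the result each frame (camPos is just an illustrative name, not from the linked answer):
// per-frame check: the camera position should be the last column of T
cv::Mat camPos = T(cv::Range(0, 3), cv::Range(3, 4)).clone();
std::cout << "camera position: " << camPos << std::endl;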
Is this expected? Or have I done something wrong?