I have a stereo vision application that is attempting to do the following:
1. Load stereo calibration data from XML (sketched below).
2. Find and match features on the left/right frames (sketched below).
3. Using the calibration matrices, triangulate points.
4. Run solvePnP on the 2D/3D correspondences (steps 3 and 4 are sketched together below).
5. Using the results from solvePnP, update the projection matrices:
// Decompose the calibrated projection matrix P into intrinsics and extrinsics
cv::Mat Kd(3, 3, CV_32F);    // intrinsic parameter matrix
cv::Mat Rd(3, 3, CV_32F);    // rotation matrix
cv::Mat Td(4, 1, CV_32F);    // translation, 4x1 homogeneous (as returned by decomposeProjectionMatrix)
cv::Mat newP1(3, 4, CV_32F); // updated 3x4 projection matrix
cv::decomposeProjectionMatrix(P, Kd, Rd, Td);
cv::Mat T1(3, 1, CV_32F);
homogeneousToEuclidean(Td, T1); // 4x1 homogeneous -> 3x1 Euclidean translation
// apply the new t and R from solvePnP
T1 = T1 + (tvec / 10);
Rd = Rd * R;
projectionFromKRt(Kd, Rd, T1, newP1); // rebuild the 3x4 projection matrix from K, R, t
6. Triangulate again.
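
For context, step 1 looks roughly like this (simplified sketch; the node names K1, D1, K2, D2, R, T are placeholders for whatever my calibration XML actually contains):

#include <opencv2/core.hpp>
#include <string>

// Simplified sketch: read the stereo calibration written with cv::FileStorage.
void loadStereoCalibration(const std::string& path,
                           cv::Mat& K1, cv::Mat& D1,
                           cv::Mat& K2, cv::Mat& D2,
                           cv::Mat& R, cv::Mat& T)
{
    cv::FileStorage fs(path, cv::FileStorage::READ);
    fs["K1"] >> K1;  // left intrinsic matrix
    fs["D1"] >> D1;  // left distortion coefficients
    fs["K2"] >> K2;  // right intrinsic matrix
    fs["D2"] >> D2;  // right distortion coefficients
    fs["R"]  >> R;   // rotation from the left to the right camera
    fs["T"]  >> T;   // translation from the left to the right camera
    fs.release();
}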
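
Step 2 is along these lines (simplified sketch; ORB with brute-force Hamming matching stands in for the actual detector/matcher combination):

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Simplified sketch: detect keypoints in both frames and keep the matched
// pixel coordinates in corresponding order.
void matchStereoFeatures(const cv::Mat& left, const cv::Mat& right,
                         std::vector<cv::Point2f>& ptsL,
                         std::vector<cv::Point2f>& ptsR)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpL, kpR;
    cv::Mat descL, descR;
    orb->detectAndCompute(left,  cv::noArray(), kpL, descL);
    orb->detectAndCompute(right, cv::noArray(), kpR, descR);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true /*crossCheck*/);
    std::vector<cv::DMatch> matches;
    matcher.match(descL, descR, matches);

    for (const cv::DMatch& m : matches) {
        ptsL.push_back(kpL[m.queryIdx].pt);
        ptsR.push_back(kpR[m.trainIdx].pt);
    }
}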
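
Steps 3 and 4 together look roughly like this (simplified sketch; it assumes P1/P2 are the calibrated 3x4 projection matrices, e.g. P1 = K1*[I|0] and P2 = K2*[R|T], and that the matched points have already been undistorted):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Simplified sketch: triangulate the matched points with the calibrated
// projection matrices, then run solvePnP on the 2D/3D correspondences.
void triangulateAndSolvePnp(const cv::Mat& P1, const cv::Mat& P2,
                            const cv::Mat& K1, const cv::Mat& D1,
                            const std::vector<cv::Point2f>& ptsL,
                            const std::vector<cv::Point2f>& ptsR,
                            cv::Mat& rvec, cv::Mat& tvec)
{
    // Triangulate: points4D is a 4xN matrix of homogeneous coordinates.
    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, ptsL, ptsR, points4D);

    // Convert to Euclidean 3D points (Nx1, 3-channel).
    cv::Mat points3D;
    cv::convertPointsFromHomogeneous(points4D.t(), points3D);

    // 2D/3D correspondences: each 3D point against its pixel location in the left image.
    cv::solvePnP(points3D, ptsL, K1, D1, rvec, tvec);

    // cv::Rodrigues(rvec, R) converts rvec into the 3x3 rotation matrix R.
}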
What I want to see is that the triangulated points stay in the same world space as the camera moves. Instead, the points have moved by the amount I would expect the camera to have moved. Am I thinking about this wrong? How can I triangulate the points into the same place with a known camera position?