Using solvePnP on a video stream with changing points
I am using a stereo camera to triangulate 3D points from rectified images. I then use these points, together with the corresponding keypoints, to run solvePnP. Everything runs: the 3D points look correct and the projected points look good. But the returned camera pose does not. It jumps around and gives incorrect values.
My workflow is:
Grab stereo frames.
Find keypoints in the previous (left) frame and in the current (left) and (right) frames.
Match the previous (left) frame with the current (left) frame.
Match the left descriptors (matched in the previous step) with the current (right) descriptors.
Triangulate 3D points from the matched stereo keypoints.
Use the left-camera keypoints and the triangulated 3D points to run solvePnP.
Invert the rvec and tvec values to get the camera pose.
Repeat.
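The inversion in the last step is the standard one; here is a minimal numpy sketch of what I mean (Rodrigues is written out by hand only so the snippet is self-contained; cv2.Rodrigues does the same conversion):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (same convention as cv2.Rodrigues)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float).ravel() / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def invert_pose(rvec, tvec):
    """solvePnP returns the world->camera transform; inverting it
    gives the camera pose in the coordinate frame of the 3D points."""
    R = rodrigues(rvec)
    R_inv = R.T
    t_inv = -R_inv @ np.asarray(tvec, dtype=float).reshape(3, 1)
    return R_inv, t_inv
```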
I have checked the 3D points in a 3D application, and I am projecting them back onto the camera frame; they look good.
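For reference, the reprojection check amounts to the pinhole projection with identity extrinsics, since the points are already in camera space. A minimal numpy sketch (K here is a made-up intrinsic matrix, and no distortion is modelled):

```python
import numpy as np

def project(points_cam, K):
    """Project Nx3 camera-space points through intrinsics K (no distortion).
    Returns Nx2 pixel coordinates."""
    uvw = (K @ points_cam.T).T        # Nx3 homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # divide by depth to get (u, v)
```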
I use the same keypoints that I triangulate with as the image points, so the correspondences are good.
The 3D points are in camera space, since that is what triangulatePoints returns.
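One detail of that step: triangulatePoints returns 4xN homogeneous coordinates in the frame of the first projection matrix, so I divide by w. A numpy sketch of the equivalent linear (DLT) triangulation for a single correspondence, where P1 and P2 stand for the rectified projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one stereo correspondence.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Returns a 3-vector in the frame P1/P2 are expressed in
    (camera space when P1 = K [I | 0])."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous point
    return X[:3] / X[3]        # dehomogenize, i.e. divide by w
```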
The calibration data is good.
I notice that even though I am matching the previous frame to the current one, the 3D point sets for consecutive frames do not align: for example, the first point in the set is at a different location from frame 1 to frame 2.
The camera pose, inverted or not, jumps around between -1 and 1 and does not change as the camera moves.
What am I missing?
I have tried transforming the 3D points to object space every frame, and adding the tvec and rvec every frame, but I see the same result.