Epipolar geometry pose estimation: epipolar lines look correct, but the recovered pose is wrong
I am trying to use OpenCV to estimate the pose of one camera relative to another, using SIFT feature detection, FLANN matching, and subsequent computation of the fundamental and essential matrices. After decomposing the essential matrix, I check for degenerate configurations and obtain the "right" R and t.
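For context, the matching stage boils down to something like this (a simplified sketch, not my full code; the image paths, FLANN parameters, and the 0.7 ratio threshold are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT keypoints and descriptors (cv2.SIFT_create requires OpenCV >= 4.4)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN (KD-tree) matching with Lowe's ratio test
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Matched pixel coordinates in each image
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
```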
Problem is, they never seem to be right. I am including a couple of image pairs:
- Image 2 taken with a 45 degree rotation about the Y axis and the same position w.r.t. Image 1.
- Image 2 taken from approx. a couple of meters away along the negative X direction, with a slight displacement in the negative Y direction, and approx. 45-60 degrees of rotation of the camera about the Y axis.
In the second case, the translation vector seems to overestimate the movement in Y and underestimate the movement in X. The rotation matrices, when converted to Euler angles, give wrong results in both cases. This happens with a lot of other datasets as well. I have tried switching the fundamental matrix computation technique between RANSAC, LMEDS, etc., and am now doing it with RANSAC followed by a second computation using only the inliers with the 8-point method. Changing the feature detection method does not help either. The epipolar lines seem to be correct, and the fundamental matrix satisfies x'ᵀ F x = 0 for the matched points.
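The two-pass estimation and the constraint check are roughly this (simplified; `pts1`/`pts2` are the matched point arrays from the sketch above, and the RANSAC threshold is a placeholder):

```python
import cv2
import numpy as np

# First pass: RANSAC to get an inlier mask
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
inliers1 = pts1[mask.ravel() == 1]
inliers2 = pts2[mask.ravel() == 1]

# Second pass: 8-point method on the inliers only
F, _ = cv2.findFundamentalMat(inliers1, inliers2, cv2.FM_8POINT)

# Verify the epipolar constraint; OpenCV's convention is pts2^T F pts1 = 0
ones = np.ones((len(inliers1), 1))
x1 = np.hstack([inliers1, ones])  # homogeneous points in image 1
x2 = np.hstack([inliers2, ones])  # homogeneous points in image 2
residuals = np.abs(np.sum((x2 @ F) * x1, axis=1))
print("mean |x'^T F x| =", residuals.mean())
```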
Am I missing something fundamental here? Given that the program recovers the epipolar geometry correctly, what could possibly be happening that results in a completely wrong pose? I am already doing the cheirality check to make sure the triangulated points lie in front of both cameras. Any thoughts/suggestions would be very helpful. Thanks!
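The pose recovery step is effectively equivalent to what cv2.recoverPose does (sketch; `K` here is a placeholder for my calibrated intrinsics, and `F`/`inliers1`/`inliers2` come from the snippet above):

```python
import cv2
import numpy as np

# Placeholder intrinsics; in my code K comes from calibration
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Essential matrix from F (same K for both cameras): E = K^T F K
E = K.T @ F @ K

# recoverPose decomposes E into the four (R, t) candidates and keeps the one
# whose triangulated points lie in front of both cameras (cheirality check).
# Note: t is returned unit-norm, i.e. translation is only defined up to scale.
n_good, R, t, _ = cv2.recoverPose(E, inliers1, inliers2, K)
```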
Code for reference