
Epipolar geometry pose estimation: Epipolar lines look good but wrong pose

asked 2015-07-31 13:35:56 -0600

saihv

I am trying to use OpenCV to estimate the pose of one camera relative to another, using SIFT feature tracking, FLANN matching, and subsequent computation of the fundamental and essential matrices. After decomposing the essential matrix, I check for degenerate configurations and obtain the "right" R and t.

The problem is, they never seem to be right. I am including a couple of image pairs:

  1. Image 2 taken with a 45 degree rotation about the Y axis and the same position w.r.t. Image 1.

Image pair

Result

  2. Image 2 taken from approximately a couple of meters away along the negative X direction, with a slight displacement in the negative Y direction and an approx. 45-60 degree rotation of the camera pose about the Y axis.

Image pair

Result

The translation vector in the second case seems to overestimate the movement in Y and underestimate the movement in X. The rotation matrices, when converted to Euler angles, give wrong results in both cases. This happens with a lot of other datasets as well. I have tried switching the fundamental matrix computation technique between RANSAC, LMEDS, etc., and am now doing it with RANSAC followed by a second computation using only the inliers with the 8-point method. Changing the feature detection method does not help either. The epipolar lines seem to be proper, and the fundamental matrix satisfies x'ᵀ F x = 0.
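Roughly, that estimation step looks like this (pts1/pts2 are placeholder names for the matched Nx2 float arrays from SIFT + FLANN; this is a sketch, not my exact code, which is linked below):

    import numpy as np
    import cv2

    # pts1, pts2: matched Nx2 float32 point arrays from SIFT + FLANN
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)

    # second pass: re-estimate with the 8-point method on the RANSAC inliers only
    in1 = pts1[mask.ravel() == 1]
    in2 = pts2[mask.ravel() == 1]
    F, _ = cv2.findFundamentalMat(in1, in2, cv2.FM_8POINT)

    # sanity check: the epipolar constraint x'^T F x should be close to 0
    ones = np.ones((len(in1), 1))
    x1 = np.hstack([in1, ones])          # homogeneous image points
    x2 = np.hstack([in2, ones])
    residuals = np.abs(np.sum(x2.dot(F) * x1, axis=1))
    print("mean |x'^T F x| =", residuals.mean())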

Am I missing something fundamental here? Given that the program understands the epipolar geometry properly, what could possibly be happening that results in a completely wrong pose? I am doing the check to make sure the triangulated points lie in front of both cameras. Any thoughts/suggestions would be very helpful. Thanks!
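The decomposition and cheirality check follow H&Z; sketched roughly (again with placeholder names, K being the camera intrinsics, in1/in2 the inlier matches from above), it is:

    # E from F and the intrinsics, then the four (R, t) candidates via SVD (H&Z)
    E = K.T.dot(F).dot(K)
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1, R2 = U.dot(W).dot(Vt), U.dot(W.T).dot(Vt)
    if np.linalg.det(R1) < 0: R1 = -R1   # enforce proper rotations (det = +1)
    if np.linalg.det(R2) < 0: R2 = -R2
    t = U[:, 2].reshape(3, 1)

    # keep the candidate that puts the most triangulated points
    # in front of both cameras
    best, best_count = None, -1
    P1 = K.dot(np.hstack([np.eye(3), np.zeros((3, 1))]))
    for R, tc in [(R1, t), (R1, -t), (R2, t), (R2, -t)]:
        P2 = K.dot(np.hstack([R, tc]))
        X = cv2.triangulatePoints(P1, P2, in1.T, in2.T)
        X = X[:3] / X[3]                 # dehomogenize -> 3xN
        # cheirality: depth must be positive in both camera frames
        count = np.sum((X[2] > 0) & ((R.dot(X) + tc)[2] > 0))
        if count > best_count:
            best, best_count = (R, tc), count
    R, t = best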

Code for reference


1 answer


answered 2015-08-01 09:39:39 -0600

Eduardo

updated 2015-08-01 09:45:23 -0600

Not really an answer, but my comment doesn't fit.

I don't know Python well, but I will try to help a little:

  • In the function in_front_of_both_cameras, do you find the z coordinate using the intersection between the two viewing rays? If you have not already done so, you could check the formula on some simple cases.
  • I don't understand this line: first_3d_point = np.array([first[0] * first_z, second[0] * first_z, first_z]); to me it should be: first_3d_point = np.array([first[0] * first_z, first[1] * first_z, first_z]).
  • Normally it should be OK, but you could try to print the rotation and translation values for each configuration, to see whether the pose check is OK and whether one of the configurations gives correct values.
  • For the Euler angles, which convention did you use? For roll, pitch, yaw I usually use the Z1Y2X3 convention. You can also use RQDecomp3x3 directly; see the sketch after this list.
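A minimal sketch of the Z1Y2X3 (yaw-pitch-roll) extraction, assuming R is the 3x3 rotation matrix you obtained (not tested against your data):

    import numpy as np
    import cv2

    def euler_zyx(R):
        # ZYX convention: R = Rz(yaw) . Ry(pitch) . Rx(roll), angles in radians
        yaw = np.arctan2(R[1, 0], R[0, 0])
        pitch = -np.arcsin(R[2, 0])
        roll = np.arctan2(R[2, 1], R[2, 2])
        return yaw, pitch, roll

    # alternatively with OpenCV: the first return value holds the three
    # Euler angles in degrees
    angles, mtxR, mtxQ, Qx, Qy, Qz = cv2.RQDecomp3x3(R)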

Comments

Hi Eduardo, thanks for the insight. About the first two points: I did make a mistake in point 2, thanks for spotting it. I was referring to this answer when I coded it, and I believe that formula is from H&Z. http://answers.opencv.org/question/27...

About point 3, I have had cases where none of the 4 configurations made proper sense, so I was confused about whether I am expecting too much from the method or whether my features are not good enough. But again, the epipolar lines looked proper. I am going to retry with the Euler angle conversion changed; I was using X1Y2Z3 (again, from a response to another question on this forum).

saihv ( 2015-08-01 20:50:46 -0600 )
