Extract rotation and translation from Fundamental matrix

asked 2019-01-10 13:26:14 -0600 by xavier12358

Hello,

I am trying to extract the rotation and translation from my simulated data, generated with a large field-of-view fisheye camera.


So I compute the fundamental matrix:

fundamentalMatrix =
[[ 6.14113278e-13 -3.94878503e-05  4.77387412e-03]
 [ 3.94878489e-05 -4.42888577e-13 -9.78340822e-03]
 [-7.11839447e-03  6.31652818e-03  1.00000000e+00]]

But when I extract the rotation and translation with recoverPose, I get wrong results:

R = [[ 0.60390422,  0.28204674, -0.74548597],
     [ 0.66319708,  0.34099148,  0.66625405],
     [ 0.44211914, -0.89675774,  0.01887361]]

T = [[0.66371609], [0.74797309], [0.00414923]]

Even when I plot the epipolar lines computed from the fundamental matrix, the lines don't pass through the corresponding points in the other image.

I don't really understand what I am doing wrong.

fundamentalMatrix, status = cv2.findFundamentalMat(uv_cam1, uv_cam2, cv2.FM_RANSAC, 3, 0.8)
cameraMatrix = np.eye(3)
i = cv2.recoverPose(fundamentalMatrix, uv_cam1, uv_cam2, cameraMatrix)


1 answer


answered 2019-01-12 17:50:05 -0600 by Eduardo

updated 2019-01-12 18:48:05 -0600

Looks like your points are planar?

If so, this is a degenerate configuration for correctly estimating the essential matrix. Have a look at this course:

Or at these two classical books:

Here are the relevant slides from the mentioned course:

[slides from the course illustrating why planar point configurations are degenerate for essential matrix estimation]

Since you are using a fisheye lens, I would undistort the images before computing the fundamental matrix.
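Something like the following sketch, assuming the pinhole intrinsics K and fisheye distortion coefficients D are known from your simulation: cv2.fisheye.undistortPoints maps the distorted pixels to normalized image coordinates, so the camera matrix passed afterwards is the identity.

import cv2
import numpy as np

# uv_cam1, uv_cam2: Nx2 distorted pixel coordinates from the two views;
# cv2.fisheye.undistortPoints expects shape (N, 1, 2)
pts1 = uv_cam1.reshape(-1, 1, 2).astype(np.float64)
pts2 = uv_cam2.reshape(-1, 1, 2).astype(np.float64)

# without a P argument the points are returned in normalized image coordinates
norm1 = cv2.fisheye.undistortPoints(pts1, K, D)
norm2 = cv2.fisheye.undistortPoints(pts2, K, D)

# normalized coordinates -> identity camera matrix, small RANSAC threshold
E, inliers = cv2.findEssentialMat(norm1, norm2, np.eye(3), method=cv2.RANSAC, prob=0.999, threshold=1e-3)
retval, R, t, inliers = cv2.recoverPose(E, norm1, norm2, np.eye(3), mask=inliers)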


Here are some experiments with the fundamental / essential matrices and pose recovery:

  • generate 8 3D points in a generic configuration
  • generate an initial camera pose and a second camera pose
  • project the 3D points using the two poses
  • compute the fundamental and essential matrices
  • try to recover the pose
  • compare the recovered pose with the true camera displacement

Code:

#include <iostream>
#include <opencv2/opencv.hpp>

// Relative pose from camera 1 to camera 2: R_1to2 = R2*R1^T, t_1to2 = t2 - R_1to2*t1
void cameraDisplacement(const cv::Mat& rvec1, const cv::Mat& tvec1, const cv::Mat& rvec2, const cv::Mat& tvec2,
  cv::Mat& rvec1to2, cv::Mat& tvec1to2) {
  cv::Mat R1, R2, R1to2;
  cv::Rodrigues(rvec1, R1);
  cv::Rodrigues(rvec2, R2);
  R1to2 = R2 * R1.t();
  cv::Rodrigues(R1to2, rvec1to2);

  tvec1to2 = -R1to2*tvec1 + tvec2;
}

void compute_R_t_fromEssentialMatrix(const cv::Mat& E, std::vector<cv::Mat>& rvecs, std::vector<cv::Mat>& ts, std::vector<cv::Mat>& ts2) {
  //https://github.com/libmv/libmv/blob/8040c0f6fa8e03547fd4fbfdfaf6d8ffd5d1988b/src/libmv/multiview/fundamental.cc#L302-L338
  cv::Mat w, u, vt;
  cv::SVDecomp(E, w, u, vt, cv::SVD::FULL_UV);

  // Last column of U is undetermined since d = (a a 0).
  if (cv::determinant(u) < 0) {
    u.col(2) *= -1;
  }

  // Last row of Vt is undetermined since d = (a a 0).
  if (cv::determinant(vt) < 0) {
    vt.row(2) *= -1;
  }
  //std::cout << "vt:\n" << vt << std::endl;

  // From the factorization E = U*diag(1,1,0)*Vt: R = U*W*Vt or U*Wt*Vt, t = +/- last column of U
  cv::Mat W = (cv::Mat_<double>(3, 3) << 0, -1, 0,
    1, 0, 0,
    0, 0, 1);

  cv::Mat U_W_Vt = u * W * vt;
  cv::Mat U_Wt_Vt = u * W.t() * vt;

  rvecs.resize(4);
  cv::Mat R = U_W_Vt, rvec;
  cv::Rodrigues(R, rvec);
  rvecs[0] = rvec;
  rvecs[1] = rvec;

  cv::Mat R2 = U_Wt_Vt, rvec2;
  cv::Rodrigues(R2, rvec2);
  rvecs[2] = rvec2;
  rvecs[3] = rvec2;

  ts.resize(4);
  ts[0] = u.col(2);
  ts[1] = -u.col(2);
  ts[2] = u.col(2);
  ts[3] = -u.col(2);

  //https://en.wikipedia.org/wiki/Essential_matrix#Determining_R_and_t_from_E
  ts2.resize(4);
  cv::Mat Z = (cv::Mat_<double>(3, 3) << 0, 1, 0,
    -1, 0, 0,
    0, 0, 0);
  cv::Mat tskew = u*Z*u.t();
  ts2[0] = (cv::Mat_<double>(3, 1) << tskew.at<double>(2, 1),
    tskew.at<double>(0, 2),
    tskew.at<double>(1, 0));
  ts2[1] = -ts2[0];
  ts2[2] = ts2[0];
  ts2[3] = -ts2[0];
}

// Apply the rigid transformation: ptTrans = R*pt + tvec
void transform(const cv::Point3d& pt, const cv::Mat& rvec, const cv::Mat& tvec, cv::Point3d& ptTrans) {
  cv::Mat R;
  cv::Rodrigues(rvec, R);

  cv::Mat matPt = (cv::Mat_<double>(3, 1) << pt.x, pt.y, pt.z);
  cv::Mat matPtTrans = R*matPt + tvec;
  ptTrans = cv::Point3d(matPtTrans.at<double>(0), matPtTrans.at<double>(1), matPtTrans.at<double>(2));
}
(more)
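Among the four (R, t) candidates, only one puts the triangulated points in front of both cameras; this cheirality test is what cv::recoverPose performs internally. A minimal Python sketch of the test, assuming norm1 / norm2 are the matched points in normalized coordinates with shape (N, 1, 2) and Rs / ts are the candidate lists (hypothetical names):

import cv2
import numpy as np

def count_points_in_front(R, t, norm1, norm2):
    # camera 1 at the origin, camera 2 at the candidate pose (R, t)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    X = cv2.triangulatePoints(P1, P2, norm1.reshape(-1, 2).T, norm2.reshape(-1, 2).T)
    X /= X[3]                                 # homogeneous -> Euclidean
    z1 = X[2]                                 # depths in camera 1
    z2 = (R @ X[:3] + t.reshape(3, 1))[2]     # depths in camera 2
    return int(np.sum((z1 > 0) & (z2 > 0)))

# keep the candidate that puts the most points in front of both cameras
R_best, t_best = max(zip(Rs, ts), key=lambda Rt: count_points_in_front(*Rt, norm1, norm2))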

Comments

Thank you very much for the explanation. Do you think we could do the same with the Mei calibrated lens model? The fundamental matrix computation is the same, but the essential matrix is harder to obtain.

xavier12358 (2019-01-13 10:08:47 -0600)

Sorry, I don't have any experience with wide-angle lenses.

Maybe you can look at a library dedicated to structure from motion or visual odometry to see how they handle fisheye lenses?

Eduardo (2019-01-14 02:54:52 -0600)

I will look into the omnidirectional and fisheye problems. I modified my program to use 3D data (not planar data) and the estimation is not bad, except for the Z axis. My transform is R = [0, 0, 0] and T = [1, -1, -2], and my estimate is:

R = [[ 1.00000000e+00,  3.85740245e-08,  1.05487079e-06],
     [-3.85749802e-08,  1.00000000e+00,  9.06028045e-07],
     [-1.05487076e-06, -9.06028085e-07,  1.00000000e+00]]

T = [[-0.65188513], [ 0.75831535], [ 0.00190054]]

I think the translation is estimated up to a scale factor, which is why the X and Y values are not exactly 1 and -1, but the Z value of the translation is incorrect. Do you know why?

xavier12358 (2019-01-14 07:31:45 -0600)

Yes, the translation can only be estimated up to a scale factor.

I don't have this issue when I use no rotation. Maybe check that the fundamental matrix is correctly estimated by computing the error x2^T * F * x1?
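A minimal sketch of that check, assuming uv_cam1 / uv_cam2 are Nx2 pixel arrays and F is the estimated fundamental matrix; the residual x2^T * F * x1 should be close to zero for every inlier correspondence:

import numpy as np

# homogeneous pixel coordinates
x1 = np.hstack([uv_cam1, np.ones((uv_cam1.shape[0], 1))])
x2 = np.hstack([uv_cam2, np.ones((uv_cam2.shape[0], 1))])

# epipolar constraint residual x2_i^T * F * x1_i for each correspondence i
residuals = np.einsum('ij,jk,ik->i', x2, F, x1)
print(np.abs(residuals).max())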

Eduardo (2019-01-15 03:02:17 -0600)
