
Stereo re-calibration using OpenCV's findEssentialMat(), recoverPose()

asked 2018-04-17 00:31:09 -0600

xginu

I’m trying to re-calibrate a stereo pair that was previously calibrated using standard chessboard images. Over time, temperature changes, etc., shift the baseline/rotation of the two cameras. I'm trying to recover the stereo calibration using a single left/right camera image of a pattern (like a chessboard).

tl;dr: Recover ‘extrinsic’ params for a stereo pair using a single left/right calibration pattern image. Assume camera ‘intrinsics’ stay the same.

What seems like a straightforward series of steps is giving incorrect results:

// Get corresponding points in 'pixel' coordinates
vector<Point2f> points_left = ...;
vector<Point2f> points_right = ...;

// Undistort, keeping the points in pixel coordinates (P = camera matrix)
undistortPoints(points_left, points_left, cameraMatrix[0], distCoeffs[0], noArray(), cameraMatrix[0]);
undistortPoints(points_right, points_right, cameraMatrix[1], distCoeffs[1], noArray(), cameraMatrix[1]);

auto E = findEssentialMat(points_left, points_right, cameraMatrix[0], RANSAC, 0.999, 1.0);

// We want to recover R, t
Mat R_new, t_new;
recoverPose(E, points_left, points_right, cameraMatrix[0], R_new, t_new);

// Compute new R1, R2, P1, P2, Q, as well as adjusted 'valid ROIs' r1, r2
Mat R1, R2, P1, P2, Q;
Rect r1, r2;
stereoRectify(cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1],
              image_size, R_new, t_new, R1, R2, P1, P2, Q,
              CALIB_ZERO_DISPARITY, 0, image_size, &r1, &r2);

Problem 1) Even though the cameras have moved only very slightly, the recovered R, t are quite different from the original R, t computed during the initial calibration. In particular, 't' looks like a unit vector! The OpenCV documentation for recoverPose() says nothing about this.
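For example, a quick sanity check (t_orig here stands for the translation from my original calibration; the name is made up for illustration):

// The recovered translation always comes back with unit length:
cout << norm(t_new) << endl;   // prints ~1.0
// ...while the original calibration's translation holds the real baseline:
cout << norm(t_orig) << endl;  // e.g. the physical baseline in mm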

Problem 2) I also tried the ‘normalized’ output of undistortPoints(), by passing noArray() as the last parameter instead of ‘cameraMatrix’, and then using focal length = 1, principal point = (0,0) in findEssentialMat(). But to no avail; the final result is still wrong.
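For reference, here is a sketch of that normalized variant, starting again from the raw pixel points. One pitfall I may have missed: the RANSAC threshold is in the same units as the points, so in normalized coordinates it should be divided by the focal length:

// Variant: leave the output of undistortPoints() in normalized coordinates
// by not passing a new camera matrix (P defaults to noArray()).
vector<Point2f> norm_left, norm_right;
undistortPoints(points_left, norm_left, cameraMatrix[0], distCoeffs[0]);
undistortPoints(points_right, norm_right, cameraMatrix[1], distCoeffs[1]);

// With normalized points the effective camera is the identity:
// focal = 1, principal point = (0, 0). Scale the 1-pixel RANSAC
// threshold into normalized units by dividing by the focal length.
double f = cameraMatrix[0].at<double>(0, 0);
Mat E = findEssentialMat(norm_left, norm_right, 1.0, Point2d(0, 0),
                         RANSAC, 0.999, 1.0 / f);

Mat R_new, t_new;
recoverPose(E, norm_left, norm_right, R_new, t_new, 1.0, Point2d(0, 0));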

Am I missing a step somewhere? Are my inputs to findEssentialMat() correct?


Comments

Hi, have you figured out a way to do this? I'm interested in the same thing, and maybe we can help each other find a solution!

HYPEREGO ( 2019-04-12 10:50:19 -0600 )

1 answer


answered 2018-10-25 22:14:31 -0600

Rengao Zhou


Yes, the 't' decomposed from the essential matrix will be a unit vector; there is nothing you can do about that. The translation 't' is only determined up to a scale factor, which cannot be recovered from the stereo images alone.

A possible solution is to introduce real-world scale information, for example by using an AprilTag (or any other target of known size) or by adding an IMU to your system.
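In your re-calibration case there is also a shortcut: reuse the baseline length from the original calibration. A minimal sketch, assuming the physical baseline length itself did not change (only the orientation drifted), with T_orig being the original calibration's translation:

// t_new from recoverPose() is a unit vector; rescale it with the
// baseline length from the original calibration to get metric units.
Mat t_metric = t_new * norm(T_orig);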

Regarding the rotation 'R', the result of recoverPose() should be correct. However, when you use findEssentialMat() this way, you are implicitly assuming that the intrinsics of the two cameras are the same. Thus, if the intrinsics differ too much, the rotation 'R' you get could be incorrect.
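Also worth noting: newer OpenCV releases (4.5 and later, if I remember correctly) add a findEssentialMat() overload that takes separate intrinsics and distortion coefficients for the two cameras, so the "same intrinsics" assumption can be avoided entirely. A sketch:

// Overload with per-camera intrinsics (OpenCV >= 4.5). Note that it
// takes the raw (distorted) pixel points and undistorts internally.
Mat mask;
Mat E = findEssentialMat(points_left, points_right,
                         cameraMatrix[0], distCoeffs[0],
                         cameraMatrix[1], distCoeffs[1],
                         RANSAC, 0.999, 1.0, mask);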


Comments

Which real-world information can I use to estimate the translation vector's scale factor? Can I use some knowledge about my stereo rig to get this information?

HYPEREGO ( 2019-04-12 10:51:21 -0600 )

@HYPEREGO Sorry for the late reply. Basically, as I said, you could make use of an AprilTag or a chess board, because the size of a chess board square is measurable in the real world. Briefly: use OpenCV to get R and the unit vector t, use image gradients to extract the chess board corners in your images, then use OpenCV's triangulatePoints() to reconstruct the 3D points of the chess board in your coordinate system. Now, t(in real world) = t(in your coordinate system) * width(of chess board square in real world) / width(of chess board square in your coordinate system). Although OpenCV doesn't provide a direct function for this, you can write a small program yourself, or use any calibration project on GitHub.

Additionally, if you are self-calibrating a moving rig, you could use an IMU.
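A minimal sketch of that scale recovery (all names assumed: square_size_real is the measured square size, and points_left/points_right are the undistorted pixel corners in row-major chessboard order, so corners 0 and 1 are one square apart):

// Projection matrices in the left camera's frame: P1 = K1 [I | 0], P2 = K2 [R | t].
Mat Rt1 = Mat::eye(3, 4, CV_64F);
Mat Rt2(3, 4, CV_64F);
R_new.copyTo(Rt2(Rect(0, 0, 3, 3)));
t_new.copyTo(Rt2(Rect(3, 0, 1, 3)));
Mat P1 = cameraMatrix[0] * Rt1;
Mat P2 = cameraMatrix[1] * Rt2;

// Reconstruct the corners (homogeneous 4xN output).
Mat points4d;
triangulatePoints(P1, P2, points_left, points_right, points4d);
points4d.convertTo(points4d, CV_64F);

// Dehomogenize two adjacent corners and measure one square.
Mat p0 = points4d.col(0) / points4d.at<double>(3, 0);
Mat p1 = points4d.col(1) / points4d.at<double>(3, 1);
double square_size_rec = norm(p1.rowRange(0, 3) - p0.rowRange(0, 3));

// t(real world) = t(unit) * real square size / reconstructed square size.
Mat t_metric = t_new * (square_size_real / square_size_rec);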

Rengao Zhou ( 2019-05-21 02:30:43 -0600 )

Thank you for the reply. I solved some of my problems in the meantime, but I'm really grateful for the details. Yeah, got it perfectly; I knew that before, but it was probably a bit confusing for me :P

HYPEREGO ( 2019-05-23 08:15:36 -0600 )
