calibrate stereo system without calling cv::stereoCalibrate

I saw a program to calibrate a stereo system (camera-projector pair). The outputs of the program are:

1. camera intrinsic matrix
2. camera distortion coefficients
3. camera extrinsic vectors (obtained from the rvecs and tvecs returned by cv::calibrateCamera)
4. projector intrinsic matrix
5. projector distortion coefficients
6. projector extrinsic vectors (obtained from the rvecs and tvecs returned by cv::calibrateCamera)

I dug into the code and there was no call to cv::stereoCalibrate. However, the program uses those 6 outputs to scan 3D objects successfully (the source of the scanning part is closed; I have just the calibration part).

How is this possible? How can I obtain the rotation matrix and translation vector needed to compute the essential matrix from the rvecs and tvecs? What am I missing here?



The equations are:

• camera pose transformation matrix cTo: this matrix transforms a 3D point expressed in the object (calibration board) frame into the camera frame:

      Xc = cTo * Xo,   with   cTo = [ Rc  tc ]
                                    [ 0    1 ]

  where Rc is the 3x3 rotation matrix obtained from the camera rvec with cv::Rodrigues and tc is the camera tvec.

• similarly, the projector pose transformation matrix pTo transforms the same point into the projector frame:

      Xp = pTo * Xo

  built from the projector rvec and tvec.

• the transformation between the camera frame and the projector frame can then be calculated as:

      cTp = cTo * (pTo)^-1

See a robotics course about homogeneous transformation matrices for more information.


Thank you very much for the answer. As I understand it, the final cTp matrix contains the rotation matrix and translation vector. Is that correct? I applied your answer but did not get correct results, and I want to make sure I understand correctly first (most probably my implementation has some bugs).

Hi, I was facing a similar problem. I wanted to compute a fundamental matrix from the two projection matrices, described like here. What really helped me out is this. Just add a few lines of code as stated in the previous link and you can get R and t from r1, t1 and r2, t2. Here (in line 459) you have the principle for obtaining the essential matrix from R and t. Hope I could help!

Yes, cTp is a 4x4 matrix containing the rotation matrix and translation vector that transform a point expressed in the projector frame into the camera frame (similar to the stereoCalibrate function, which returns R and T, i.e. leftTright or rightTleft, I never remember which). The best way to test whether your code is correct is to use the chessboard images in the OpenCV samples directory and compare the output of stereoCalibrate with the pipeline findChessboardCorners -> calibrateCamera -> solvePnP for both cameras, then calculate the transformation matrix between them.

Probably you can calculate the essential matrix directly from R1, T1, R2, T2 as mentioned above.

This is what libmv does here:

    void EssentialFromRt(const Mat3 &R1,
                         const Vec3 &t1,
                         const Mat3 &R2,
                         const Vec3 &t2,
                         Mat3 *E) {
      Mat3 R;
      Vec3 t;
      // Relative motion between the two cameras.
      RelativeCameraMotion(R1, t1, R2, t2, &R, &t);
      // E = [t]_x * R
      Mat3 Tx = CrossProductMatrix(t);
      *E = Tx * R;
    }

And:

    void RelativeCameraMotion(const Mat3 &R1,
                              const Vec3 &t1,
                              const Mat3 &R2,
                              const Vec3 &t2,
                              Mat3 *R,
                              Vec3 *t) {
      // R, t map points from camera 1's frame to camera 2's frame.
      *R = R2 * R1.transpose();
      *t = t2 - (*R) * t1;
    }

Stats

Asked: 2017-07-08 19:30:13 -0500


Last updated: Jul 09 '17