
Create a stereo projection matrix using rvec and tvec?

asked 2017-06-30 05:18:01 -0500 by antithing

updated 2017-06-30 06:37:33 -0500

I am setting up Projection Matrices in a stereo camera rig, using the intrinsic matrix. Like so:

    // Camera 1 projection matrix K[I|0]
    cv::Mat P1 = cv::Mat::zeros(3, 4, CV_32F); // zero-init so the fourth column is 0
    K.copyTo(P1.rowRange(0, 3).colRange(0, 3)); // K is the camera matrix

    float tx = 0.12f; // stereo baseline
    float ty = 0;
    float tz = 0;

    // Rotation (images are rectified, so this is the identity)
    float rots[9] = { 1,0,0, 0,1,0, 0,0,1 };
    cv::Mat R = cv::Mat(3, 3, CV_32F, rots);

    // Translation (stereo camera, rectified images, 12 cm baseline)
    float trans[3] = { tx, ty, tz };
    cv::Mat t = cv::Mat(3, 1, CV_32F, trans);

    // Camera 2 projection matrix K[R|t]
    cv::Mat P2(3, 4, CV_32F);
    R.copyTo(P2.rowRange(0, 3).colRange(0, 3));
    t.copyTo(P2.rowRange(0, 3).col(3));
    P2 = K * P2;

This gives me:

matrix P1
[333.02081, 0, 318.51651, 0;
 0, 333.02081, 171.93558, 0;
 0, 0, 1, 0]
matrix P2
[333.02081, 0, 318.51651, 39.962498;
 0, 333.02081, 171.93558, 0;
 0, 0, 1, 0]

which seems to work well.

Now, I need to update these matrices using the current camera pose (rvec and tvec).

I am doing this with:

    // Camera pose (inverse of the solvePnP result)
    cv::Mat R(3, 3, CV_32F);
    cv::Rodrigues(rvec, R); // R is 3x3
    R = R.t();                   // rotation of the inverse
    cv::Mat tvecInv = -R * tvec; // translation of the inverse

    // Translation (stereo camera, rectified images, 12 cm baseline)
    float trans[3] = { tx, ty, tz };
    cv::Mat t2 = cv::Mat(3, 1, CV_32F, trans);

    // Add the baseline to camera 2
    t2.at<float>(0) = tvecInv.at<float>(0) + tx;
    t2.at<float>(1) = tvecInv.at<float>(1);
    t2.at<float>(2) = tvecInv.at<float>(2);

    // Camera 1 projection matrix K[R|t]
    cv::Mat P1(3, 4, CV_32F, cv::Scalar(0));
    R.copyTo(P1.rowRange(0, 3).colRange(0, 3));
    tvecInv.copyTo(P1.rowRange(0, 3).col(3));
    P1 = K * P1;

    // Camera 2 projection matrix K[R|t2]
    cv::Mat P2(3, 4, CV_32F);
    R.copyTo(P2.rowRange(0, 3).colRange(0, 3));
    t2.copyTo(P2.rowRange(0, 3).col(3));
    P2 = K * P2;

Which gives me:

update P1
[-362.74387, -131.39442, 252.00798, -272.50296;
 -9.197648, -374.57056, 8.7753143, -74.796822;
 -0.098709576, -0.4356361, 0.89469415, -0.89352131]
update P2
[-362.74387, -131.39442, 252.00798, 37.901237;
 -9.197648, -374.57056, 8.7753143, 421.92899;
 -0.098709576, -0.4356361, 0.89469415, -0.0064714332]

This is very wrong. What is the correct way to update stereo projection matrices using rvec and tvec values?

thank you.


1 answer


answered 2017-06-30 05:47:39 -0500 by Ice_T02

I am quite confused by the phrase "update a projection matrix from rVec and tVec". But if you are looking for a way to obtain the projection matrix from a camera calibration, given the intrinsic and extrinsic parameters, you can do the following:

cv::Mat computeProjMat(cv::Mat camMat, vector<cv::Mat> rotVec, vector<cv::Mat> transVec)
{
    cv::Mat rotMat(3, 3, CV_64F), rotTransMat(3, 4, CV_64F); // init
    // Convert the rotation vector into a rotation matrix
    cv::Rodrigues(rotVec[0], rotMat);
    // Append the translation vector to the rotation matrix
    cv::hconcat(rotMat, transVec[0], rotTransMat);
    // Compute the projection matrix by multiplying the intrinsic parameter
    // matrix (A) with the 3 x 4 rotation/translation pose matrix (RT):
    // Projection Matrix = A * RT
    return (camMat * rotTransMat);
}


Thank you! As I have a stereo setup, I also need to compute the right camera's projection matrix. As the images are rectified, the only difference should be the 12 cm horizontal baseline. Can I just add tvec.at<float>(0) = tvec.at<float>(0) + tx; to the second camera case?

antithing ( 2017-06-30 06:21:59 -0500 )

Just for clarification: you have a stereo camera setup, one camera is calibrated and the other one is not? And you want to add the baseline to the translation in order to obtain the tVec of the second camera?

Ice_T02 ( 2017-06-30 06:28:24 -0500 )

I am using the projection matrices to triangulate points, so I need to update them for both cameras (P1, P2). I have the camera pose for the first (left) camera as tvec and rvec.

antithing ( 2017-06-30 06:29:39 -0500 )

Using the above code, and adding the baseline to the right camera, gives me:

matrix P1
[333.02081, 0, 318.51651, 0;
 0, 333.02081, 171.93558, 0;
 0, 0, 1, 0]
matrix P2
[333.02081, 0, 318.51651, 39.962498;
 0, 333.02081, 171.93558, 0;
 0, 0, 1, 0]

update P1
[-369.0591132677916, -139.4123867199239, 268.2339135065358, -288.9889935830566;
 -9.357930960698575, -363.8086216978145, 12.7896755685808, -76.53825649077694;
 -0.09870957610967065, -0.4356361064148604, 0.8946941390061459, -0.8935213212784
update P2
[-369.0591132677916, -139.4123867199239, 268.2339135065358, 286.0813817703531;
 -9.357930960698575, -363.8086216978145, 12.7896755685808, 97.88704722340324;
 -0.09870957610967065, -0.4356361064148604, 0.89
antithing ( 2017-06-30 06:40:35 -0500 )

The initial matrices are set manually, as in my question; the updated ones come from your code, after the camera pose has been determined by solvePnP.

antithing ( 2017-06-30 06:41:20 -0500 )

I still don't understand what you mean by "update". You compute the projection matrix once from the intrinsic and extrinsic parameters, and it stays the same as long as you don't rearrange the camera setup or change the focal length etc.; if you do, your camera is uncalibrated. Also, with only the extrinsic parameters you can't compute a projection matrix; you need the intrinsic parameters as well. I am also not certain how you obtain the intrinsic parameters for the second camera. Do you assume they are exactly the same? Further, in order to obtain rVec and tVec for the 2nd camera, you need to know the relative rotation/translation from camera 1 to camera 2.

Ice_T02 ( 2017-06-30 06:51:11 -0500 )

Sorry if I have been unclear. I have the initial projection matrices for camera 1 and camera 2, and I triangulate points using these values. Using those points, I run solvePnP, which gives me the camera pose. Now I need to triangulate points again, using the current camera position, to put the resulting points in the correct space. For this, I need new projection matrices reflecting the new camera position.

antithing ( 2017-06-30 06:56:05 -0500 )

Ok, I think from this post I understand your issue. Below is some pseudo code:

// R ... 3x3 relative rotation between camera 1 and 2
cv::Mat R = R2 * R1.t();
// t ... 3x1 relative translation between camera 1 and 2
cv::Mat t = t2 - R * t1;

Now apply R and t respectively to the rMat and tVec of camera 1 to get the correct rMat2 and tVec2 for camera 2. Then you can compute the projection matrix as given above. Note: the relative rotation and translation should not change, so you can compute them once and always apply the same values to rMat1 and tVec1.

Ice_T02 ( 2017-06-30 08:43:38 -0500 )

Ok, I think I almost have it. Thanks again for your time. It seems strange to me that the intrinsic values have changed; is this correct? For example, the fx value in the initial P1 is 333.02081, but in the updated one it is -369.0591132677916.

antithing ( 2017-06-30 09:04:07 -0500 )

I don't think you can directly compare a projection matrix from time x against one from time y, but you can use this in order to see whether the intrinsic values changed or not.

Ice_T02 ( 2017-06-30 09:10:01 -0500 )

ok! So, here is what I have:

//extrinsics
float tx = 0.12;

cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
fs["cameraMatrix"] >> cameraMatrix; 

cv::Mat tvec2;
tvec.copyTo(tvec2);
tvec2.at<float>(0) = tvec2.at<float>(0) + tx; // 12 cm baseline in the stereo; images are rectified, so R is 0.

cv::Mat P1 = computeProjMat(cameraMatrix, rvec, tvec);
cv::Mat P2 = computeProjMat(cameraMatrix, rvec, tvec2);

with your function above as computeProjMat. Weirdly, the resulting matrices look exactly the same, despite the addition of the baseline. What am I still missing?

antithing ( 2017-06-30 09:14:11 -0500 )

I assume the error is caused by using the same rotation twice. P1 and P2 will have the same rotation against the world only in very rare cases. As I suggested, you have to get the relative rotation and translation between camera 1 and camera 2, and then apply them to the solvePnP outcome (rVec1 & tVec1) in order to make rVec2 and tVec2 (for camera 2) based on camera 1.

Ice_T02 ( 2017-06-30 10:07:23 -0500 )
