
Triangulate 3D points from a stereo camera and chessboard.

asked 2016-12-05 07:24:03 -0600

antithing

Hi, I have a calibrated stereo camera, so I have the extrinsics, intrinsics and distortion coefficients.

I have a chessboard, and using the following function:

bool findChessboardCornersAndDraw(Mat inputLeft, Mat inputRight) {
    _leftOri = inputLeft;
    _rightOri = inputRight;
    // Detect the chessboard on grayscale copies of both views
    cvtColor(inputLeft, inputLeft, COLOR_BGR2GRAY);
    cvtColor(inputRight, inputRight, COLOR_BGR2GRAY);
    bool foundLeft = findChessboardCorners(inputLeft, boardSize, cornersLeft, CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);
    bool foundRight = findChessboardCorners(inputRight, boardSize, cornersRight, CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);
    // Draw the detected corners on the original colour images
    drawChessboardCorners(_leftOri, boardSize, cornersLeft, foundLeft);
    drawChessboardCorners(_rightOri, boardSize, cornersRight, foundRight);
    _leftOri = displayMode(_leftOri);
    _rightOri = displayMode(_rightOri);
    return foundLeft && foundRight;
}

I get the chessboard points. What I need to do now is write a function that returns the 3D triangulated coordinates of those points, relative to the camera.

Is there an existing function that will do this for me? Should I use:

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);

Thank you!


1 answer


answered 2016-12-05 09:21:39 -0600

Ice_T02

updated 2016-12-05 09:38:39 -0600

Hello,

Yes, you can use cv::triangulatePoints(). To use it, you require the following:

  • Camera matrix 1/2
  • Distortion coefficients 1/2
  • Rotation vector 1/2
  • Translation vector 1/2

The typical procedure is as follows:

1) Get camera calibration parameters

As described here.
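For illustration, one possible way to obtain these parameters (a sketch, not part of the original answer): per-camera calibration with cv::calibrateCamera(), where objectPoints holds the chessboard's 3D pattern points and imagePointsL/imagePointsR the detected corners over several views. The output names are placeholders chosen to match the code in the steps below.

// Sketch: calibrate each camera separately; rotVec/transVec hold one pose per chessboard view
cv::Mat camMat1, camMat2, distCoeff1, distCoeff2;
vector<cv::Mat> rotVec1, transVec1, rotVec2, transVec2;
cv::calibrateCamera(objectPoints, imagePointsL, imageSize, camMat1, distCoeff1, rotVec1, transVec1);
cv::calibrateCamera(objectPoints, imagePointsR, imageSize, camMat2, distCoeff2, rotVec2, transVec2);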

2) Undistort your points of interest (use cv::undistortPoints())

Store all points in Coords1/2. The undistorted points will be returned in undistCoords1/2:

vector<cv::Vec2d> Coords1, undistCoords1;
vector<cv::Vec2d> Coords2, undistCoords2;
cv::undistortPoints(Coords1, undistCoords1, camMat1, distCoeff1, cv::noArray(), camMat1);
cv::undistortPoints(Coords2, undistCoords2, camMat2, distCoeff2, cv::noArray(), camMat2);
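For the chessboard corners from the question, Coords1/Coords2 can be filled before the cv::undistortPoints() calls above; a small sketch (assuming cornersLeft/cornersRight are the vector<cv::Point2f> results of findChessboardCorners()):

// Sketch: copy the detected corners into the double-precision containers used above
for (const cv::Point2f& p : cornersLeft)  Coords1.push_back(cv::Vec2d(p.x, p.y));
for (const cv::Point2f& p : cornersRight) Coords2.push_back(cv::Vec2d(p.x, p.y));

(cv::undistortPoints() also accepts vector<cv::Point2f> directly, so the copy is only needed if you prefer to work in double precision.)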

3) Compute projection matrix for camera 1/2

Before we can triangulate the undistorted points, we have to compute the projection matrix for each camera. This is done as follows:

cv::Mat computeProjMat(cv::Mat camMat, vector<cv::Mat> rotVec, vector<cv::Mat> transVec)
{
    cv::Mat rotMat(3, 3, CV_64F), RTMat(3, 4, CV_64F);
    //1. Convert rotation vector into rotation matrix 
    cv::Rodrigues(rotVec.at(0), rotMat);
    //2. Append translation vector to rotation matrix
    cv::hconcat(rotMat, transVec.at(0), RTMat);
    //3. Compute projection matrix by multiplying intrinsic parameter 
    //matrix (A) with 3 x 4 rotation and translation pose matrix (RT).
    return (camMat * RTMat);
}

!!! Do this for both cameras !!!
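A short usage sketch (the calibration variables are the assumed names from step 1):

cv::Mat projMat1 = computeProjMat(camMat1, rotVec1, transVec1);
cv::Mat projMat2 = computeProjMat(camMat2, rotVec2, transVec2);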

4) Triangulate your points

cv::Mat triangCoords4D;
cv::triangulatePoints(projMat1, projMat2, undistCoords1, undistCoords2, triangCoords4D);

!!! Be aware the output points are 4D (homogeneous coordinates), not 3D !!!

To get 3D coordinates you can do the following:

cv::Vec4d triangCoords1 = triangCoords4D.col(0);
cv::Vec4d triangCoords2 = triangCoords4D.col(1);
cv::Vec3d Coords13D, Coords23D;
for (unsigned int i = 0; i < 3; i++) {
    // Divide by the homogeneous (4th) component
    Coords13D[i] = triangCoords1[i] / triangCoords1[3];
    Coords23D[i] = triangCoords2[i] / triangCoords2[3];
}
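If you need all chessboard corners at once, cv::convertPointsFromHomogeneous() performs the same division for the whole matrix; a small sketch (an assumption, not part of the original answer):

// triangulatePoints() returns a 4xN matrix, so transpose to Nx4 first;
// the result is an Nx1, 3-channel matrix of 3D points
cv::Mat triangCoords3D;
cv::convertPointsFromHomogeneous(triangCoords4D.t(), triangCoords3D);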

Hope this helps.


Comments

Thank you! Excellent answer.

antithing ( 2016-12-06 04:14:00 -0600 )

You are welcome

Ice_T02 ( 2016-12-06 04:37:28 -0600 )

Hi! Sorry to bother you again... I am trying to implement this, can you explain something for me?

In computeProjMat, what are the vector<cv::Mat> rotVec and vector<cv::Mat> transVec values? Are these the extrinsics between the stereo cameras? I have:

baseline 120 rx 0.000927364 rz -0.000665997

(it is a pre calibrated stereo camera, hard mounted)

How do I convert these values to the rVec, tVec?

Thanks!

stillNovice ( 2016-12-07 15:25:12 -0600 )

Basically you take the rotation and translation vectors from step 1 and compute the projection matrix out of them. cv::Rodrigues converts the rotation vector into a rotation matrix, and cv::hconcat appends the translation vector to that matrix. The rotation and translation vectors can be obtained, as already mentioned, through step 1.

Ice_T02 ( 2016-12-07 15:33:10 -0600 )

OK, thanks again. As I have the values above (baseline, rx, ry), can I just push those into an rVec/tVec vector?

stillNovice ( 2016-12-07 15:46:00 -0600 )

for example:

cv::Mat rvec = (cv::Mat1d(1, 3) << -0.000665997, 0.000927364, 0);
cv::Mat tvec = (cv::Mat1d(1, 3) << 120, 0, 0);

That should work, correct?

stillNovice ( 2016-12-07 15:50:24 -0600 )

If I am not mistaken, they are the extrinsic parameters for each camera, used to transform 3D camera coordinates to 3D world coordinates. I am not in my office anymore, so it's hard to tell whether they are correct; at least from my point of view they look correct. I can give you an example of my rotation and translation vector on Friday.

EDIT: Here you can see how I obtain rotVec and transVec ... Hope this helps!

EDIT 2: Here is an example of a rotation and translation Vector:

RotVec:
[-0.2159457957773362;
 0.6907294013818689;
 -0.2536781396587633]

TransVec:
[-3.826285942706433;
 -25.57936717941958;
 88.37743704035218]
Ice_T02 ( 2016-12-07 15:56:19 -0600 )

Hi, thanks for the guidance about the triangulation output. Could you please explain what Coords13D and Coords23D are? Are they 3D points with respect to the camera1 and camera2 coordinate systems? I thought the result of triangulation should be a single set of points with respect to the world coordinate system...

LM ( 2018-07-31 11:43:39 -0600 )

In what coordinate frame are the 3D points triangCoords4D computed?

vik748 ( 2019-02-06 19:33:03 -0600 )

To slightly correct the answer: you are supposed to choose one camera as the origin of the coordinate system. One projection matrix is computed as stated; the one for the camera chosen as the origin should be P = K * [I | 0], i.e. the camera matrix multiplied by a 3x3 identity matrix concatenated with a 3x1 zero vector. Since that camera is the origin, it is not rotated (identity matrix) and not translated (zero vector). The triangulated points are then in the coordinate frame of the camera you chose as the origin. Additionally, as the documentation says, stereoCalibrate() returns the rotation and translation (R, T) as the "rotation/translation between 1st and 2nd camera coordinate system", so I would choose the first camera as the origin.

Porucik Dan ( 2019-10-29 05:01:11 -0600 )
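For illustration, a minimal sketch of the setup described in the comment above (assuming K1, K2, R, T come from cv::stereoCalibrate() with camera 1 chosen as the origin; the variable names are placeholders):

cv::Mat Rt1, Rt2, projMat1, projMat2;
// Camera 1 is the origin: P1 = K1 * [I | 0]
cv::hconcat(cv::Mat::eye(3, 3, CV_64F), cv::Mat::zeros(3, 1, CV_64F), Rt1);
projMat1 = K1 * Rt1;
// Camera 2 is posed relative to camera 1: P2 = K2 * [R | T]
cv::hconcat(R, T, Rt2);
projMat2 = K2 * Rt2;
// Points triangulated with these matrices are expressed in camera 1's coordinate frame.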
