OpenCV triangulatePoints() and stereoRectify() relation?

Hi,

I am currently trying to triangulate points from two images obtained from a stereo setup (two cameras).

So far I have:

  1. Calibrated each camera for its intrinsic parameters.

  2. Found their relative pose using the stereoCalibrate() function (output is the R and T between the two cameras).

  3. From what I understand, I now need to use stereoRectify() to simplify the stereo correspondence problem later during triangulation. The triangulation is not working, so I suspect the issue is in the stereoRectify() step.

  4. On getting a new input image pair, I detect corners and then call cv::undistortPoints() on them (see the sketch after this list).

  5. Finally I try to triangulate, but this is not working.
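
For reference, here is a minimal sketch of the undistortion step in 4. (variable names like corners1/undistortedCorners1 are illustrative; I am not sure whether I should be passing R1/P1 here, which may be related to my questions below):

    // Sketch: undistort the detected corners from each image.
    // The last two arguments (R, P) are optional in cv::undistortPoints;
    // I am unsure whether R1/P1 from stereoRectify() belong here.
    std::vector<cv::Point2f> undistortedCorners1, undistortedCorners2;
    cv::undistortPoints(corners1, undistortedCorners1,
                        cameraMatrix1, distCoeffs1, R1, P1);
    cv::undistortPoints(corners2, undistortedCorners2,
                        cameraMatrix2, distCoeffs2, R2, P2);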

The relevant part of my code is below (I detect corners using cv::findChessboardCorners()):

    // Compute the rectification transforms (R1, R2) and the new projection
    // matrices (P1, P2) for the rectified stereo pair.
    cv::stereoRectify(cameraMatrix1, distCoeffs1,
                      cameraMatrix2, distCoeffs2,
                      imageSize, R, T, R1, R2, P1, P2, Q,
                      cv::CALIB_ZERO_DISPARITY, 1, imageSize,
                      &validRoi[0], &validRoi[1]);

    // Triangulate the matched, undistorted corners into homogeneous 4D points.
    cv::triangulatePoints(P1, P2, undistortedCorners1, undistortedCorners2, pnts4D);
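
For completeness, this is roughly how I inspect the result (a sketch; pnts3D is just my name for the dehomogenised output):

    // Convert the 4xN homogeneous output of triangulatePoints
    // into Euclidean 3D points for inspection.
    cv::Mat pnts3D;
    cv::convertPointsFromHomogeneous(pnts4D.t(), pnts3D);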

However, can someone please explain the output of stereoRectify()? It returns P1 and P2 separately, and the documentation refers to them as the projection matrix for each camera. Here is what I don't understand:

1) Why are these called projection matrices when they don't contain any rotation elements? They seem more like new camera matrices.

2) The output also contains R1 and R2. Why aren't these needed by cv::triangulatePoints()?

3) Am I supposed to pass P1 and P2 directly to cv::triangulatePoints()? Or do I need to multiply them with R1/R2 first, i.e.

    projMatrix1 = R1 * P1
    projMatrix2 = R2 * P2

and then pass those to cv::triangulatePoints()?
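
To make 3) concrete, these are the two variants I mean (a sketch; I do not know whether either is correct, which is exactly my question):

    // Variant A: pass stereoRectify's P1/P2 directly.
    cv::triangulatePoints(P1, P2, undistortedCorners1, undistortedCorners2, pnts4D);

    // Variant B: fold the rectification rotations in first.
    cv::Mat projMatrix1 = R1 * P1;   // 3x3 * 3x4 -> 3x4
    cv::Mat projMatrix2 = R2 * P2;
    cv::triangulatePoints(projMatrix1, projMatrix2,
                          undistortedCorners1, undistortedCorners2, pnts4D);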

I would be really grateful if someone could help out with this!

Thanks!