
Stereo to point cloud

asked 2015-01-30 07:56:43 -0600 by rcarrasco

Hi, I have a stereo camera calibrated with the ROS camera_calibration package, and I would like to convert the images to a point cloud.

I'm going to do this with OpenCV, using the function reprojectImageTo3D, but I need the Q matrix.

I found another function, stereoRectify, that computes the Q matrix, but I don't know how to fill in some of its parameters:

My calibration data:

Right camera:

image_width: 640
image_height: 480
camera_name: 00b09d0100a8cd50
camera_matrix:
  rows: 3
  cols: 3
  data: [618.406355982753, 0, 328.938909311292, 0, 622.618018605044, 192.823847411511, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.46935852500581, 0.495020825022316, 0.0069132704108347, 0.000957659601339344, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.987794979792204, 0.000360084570829607, -0.15575926372586, -0.00147162495744743, 0.999974269489065, -0.00702101700582399, 0.155752727800526, 0.00716454457124124, 0.987770093232116]
projection_matrix:
  rows: 3
  cols: 4
  data: [2536.3424975867, 0, 419.49533700943, 112.802583009304, 0, 2536.3424975867, 207.939946174622, 0, 0, 0, 1, 0]

Left camera:

image_width: 640
image_height: 480
camera_name: 00b09d01009b8da3
camera_matrix:
  rows: 3
  cols: 3
  data: [1303.84092827502, 0, 305.740018425863, 0, 1314.11682894773, 177.738196152231, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.151635456291086, 0.3629641480367, -0.00784184518383463, -0.000413869770118943, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.98618892981478, -0.00296598739861576, -0.165597698140795, 0.0041481493735406, 0.999968321024829, 0.006793364671487, 0.165572303141417, -0.0073864650230178, 0.986169991718928]
projection_matrix:
  rows: 3
  cols: 4
  data: [2536.3424975867, 0, 419.49533700943, 0, 0, 2536.3424975867, 207.939946174622, 0, 0, 0, 1, 0]

How do I obtain R and T?

stereoRectify(InputArray cameraMatrix1, => camera_matrix_left
              InputArray distCoeffs1, => distortion_coefficients_left
              InputArray cameraMatrix2, => camera_matrix_right
              InputArray distCoeffs2, => distortion_coefficients_right
              Size imageSize, => size of the images
              InputArray R, => ??????
              InputArray T => ??????
              OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q)
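
For reference, once R and T are available (see the comments below), the rest of the pipeline would look roughly like this. A minimal sketch, assuming cameraMatrixLeft, distCoeffsLeft, cameraMatrixRight, and distCoeffsRight are placeholders holding the YAML values above, and disparity is a disparity map you have already computed:

#include <opencv2/calib3d/calib3d.hpp>
using namespace cv;

// Placeholders: fill from the YAML above and from stereoCalibrate (see comments).
Mat cameraMatrixLeft, distCoeffsLeft, cameraMatrixRight, distCoeffsRight;
Mat R, T;

Mat R1, R2, P1, P2, Q;
stereoRectify(cameraMatrixLeft,  distCoeffsLeft,
              cameraMatrixRight, distCoeffsRight,
              Size(640, 480),            // image size from the YAML
              R, T,                      // extrinsics of one camera w.r.t. the other
              R1, R2, P1, P2, Q);        // Q is the 4x4 disparity-to-depth matrix

Mat disparity;                           // your precomputed disparity map
Mat xyz;                                 // output: CV_32FC3 image of (x, y, z) points
reprojectImageTo3D(disparity, xyz, Q, true);  // true: push invalid disparities to far z

Incidentally, your projection matrices already look rectified (both share fx = 2536.34, cx = 419.50, cy = 207.94, with a baseline term in one fourth column), so Q could in principle also be assembled by hand from those values.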

Comments

Hi, I think the easiest thing is to redo the calibration using stereoCalibrate to get the matrices R and T. Here, you seem to have the intrinsic parameters for each camera, but nothing that tells you how the second camera is related to the first, or vice versa.

Eduardo (2015-01-30 12:19:14 -0600)
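
A minimal sketch of what that stereoCalibrate call could look like (OpenCV 2.4-era API); objectPoints, imagePointsLeft, and imagePointsRight are placeholders for the chessboard corner data collected during calibration, and the camera matrices and distortion coefficients come from the YAML above:

#include <opencv2/calib3d/calib3d.hpp>
#include <vector>
using namespace cv;

// Placeholders for the chessboard detections from the calibration session:
std::vector<std::vector<Point3f> > objectPoints;     // 3D corner positions, per view
std::vector<std::vector<Point2f> > imagePointsLeft;  // detected corners, left images
std::vector<std::vector<Point2f> > imagePointsRight; // detected corners, right images

// Intrinsics from the existing calibration (fill from the YAML):
Mat cameraMatrixLeft, distCoeffsLeft, cameraMatrixRight, distCoeffsRight;

Mat R, T, E, F;
double rms = stereoCalibrate(objectPoints, imagePointsLeft, imagePointsRight,
                             cameraMatrixLeft,  distCoeffsLeft,
                             cameraMatrixRight, distCoeffsRight,
                             Size(640, 480),
                             R, T, E, F);  // R, T relate the second camera to the first

By default stereoCalibrate fixes the intrinsics (CALIB_FIX_INTRINSIC), so the per-camera calibration you already have is reused and only the extrinsics (plus E and F) are estimated.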

If you know your camera baseline and FOV, you can easily compute x,y,z without Q.

Der Luftmensch (2015-01-30 22:08:12 -0600)

I can get the camera baseline and FOV, but I don't know how to compute x,y,z. Can you help me?

rcarrasco (2015-02-02 02:00:57 -0600)

1 answer


answered 2015-02-15 09:37:14 -0600

const double hFOV = 62.85 * (CV_PI / 180.0);   // calibrated horizontal FOV (radians)
const double vFOV = 47.13 * (CV_PI / 180.0);   // calibrated vertical FOV (radians)
const double baseline = 3.0;                   // stereo baseline (cm)
const double depthConst   = baseline / (2.0 * tan(hFOV / 2.0));
const double depthFactorW = 2.0 * tan(hFOV / 2.0);
const double depthFactorH = 2.0 * tan(vFOV / 2.0);

// sizeIn is the image size (here 640x480); disparityFrame.disparity16 is a
// CV_16S disparity map (true disparity scaled by 16, as StereoBM/StereoSGBM produce).
Mat zVals = Mat::zeros(sizeIn, CV_32FC1);
for (int i = 0; i < sizeIn.height; i++) {
    for (int j = 0; j < sizeIn.width; j++) {
        short d16 = disparityFrame.disparity16.at<short>(i, j);
        if (d16 > 0) {
            // z = f * b / d with f = W / (2 tan(hFOV/2)); the factor of 16
            // undoes the disparity scaling, and the leading minus matches the
            // sign convention used for x below.
            float z = -(float)(depthConst * sizeIn.width / d16);
            z *= 16.0f;

            zVals.at<float>(i, j) = z;

            //fullDepthImage.at<Vec3f>(i,j)[2] = z;
            //fullDepthImage.at<Vec3f>(i,j)[0] = -((j - (sizeIn.width  / 2.0)) / sizeIn.width)  * depthFactorW * z;
            //fullDepthImage.at<Vec3f>(i,j)[1] =  ((i - (sizeIn.height / 2.0)) / sizeIn.height) * depthFactorH * z;
        }
    }
}

This assumes that your image center is the optical center and that your disparity values are scaled by 16, but you can adjust accordingly. The units I used were cm.
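
For completeness, the constant comes straight from the pinhole model: the horizontal focal length in pixels follows from the calibrated horizontal FOV, and depth is the usual focal-length-times-baseline-over-disparity relation:

f = \frac{W}{2\tan(\mathrm{hFOV}/2)}, \qquad
z = \frac{f\,b}{d} = \frac{b}{2\tan(\mathrm{hFOV}/2)}\cdot\frac{W}{d} = \mathtt{depthConst}\cdot\frac{W}{d}

where W is the image width in pixels, b the baseline, and d the true disparity; since disparity16 stores 16d, the code divides by disparity16 and multiplies back by 16.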


Comments

And this is easily derived; just draw some pictures to help you. Remember that you need the calibrated FOV.

Der Luftmensch (2015-02-15 09:38:25 -0600)
