Stereo Matching/Calibration Help

Hello,

I am using the Bumblebee XB3 Stereo Camera, which has 3 lenses. I've spent about three weeks reading forums, tutorials, the Learning OpenCV book and the actual OpenCV documentation on the stereo calibration and stereo matching functionality. In summary, my issue is that I get a good disparity map but very poor point clouds that look skewed/squished and are not representative of the actual scene.

What I have done so far:

Used the OpenCV stereo_calibration and stereo_matching examples to:

1) Calibrate my stereo camera using chessboard images
   Raw scene images: [image]
2) Rectify the raw images obtained from the camera using the matrices from camera calibration (a sketch of this step follows the list)
   [image]
3) Generate a disparity image from the rectified images using stereo matching (SGBM)
   [image]
4) Project these disparities to a 3D point cloud
   [image]
   and
   [image]
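For reference, here is roughly how I perform step 2 and obtain Q, following the OpenCV stereo_calib sample (a minimal sketch with illustrative variable names, not my exact code):

    // R and T come from stereoCalibrate; Q is the 4x4 reprojection matrix used later.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(cameraMatrix[0], distCoeffs[0],
                      cameraMatrix[1], distCoeffs[1],
                      imageSize, R, T, R1, R2, P1, P2, Q,
                      cv::CALIB_ZERO_DISPARITY, 1, imageSize);

    // Build the undistort + rectify lookup maps once, then remap each image.
    cv::Mat rmap[2][2];
    cv::initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, P1,
                                imageSize, CV_16SC2, rmap[0][0], rmap[0][1]);
    cv::initUndistortRectifyMap(cameraMatrix[1], distCoeffs[1], R2, P2,
                                imageSize, CV_16SC2, rmap[1][0], rmap[1][1]);

    cv::Mat rectified_imgLeft, rectified_imgRight;
    cv::remap(rawLeft,  rectified_imgLeft,  rmap[0][0], rmap[0][1], cv::INTER_LINEAR);
    cv::remap(rawRight, rectified_imgRight, rmap[1][0], rmap[1][1], cv::INTER_LINEAR);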

What I have tried so far to isolate the problem:

  • I have tried different lens pairs: first the 1st and 2nd, then the 2nd and 3rd, and finally the 1st and 3rd.
  • I've re-run calibration with my chessboard captures taken at varying distances (closer/farther away).
  • I have used over 20 stereo pairs for the calibration.
  • I've varied the chessboard size: I used a 9x6 chessboard for calibration and have now switched to an 8x5 one instead.
  • I've tried both the Block Matching and SGBM variants and get relatively similar results, with SGBM somewhat better so far.
  • I've varied the disparity ranges, changed the SAD window size, etc., with little improvement (a sketch of the settings I vary follows this list).
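For completeness, these are the kinds of SGBM settings I have been varying (a sketch using the defaults from the stereo_match sample; the numbers are illustrative, not my exact values):

    // OpenCV 2.x StereoSGBM setup; the output disparity image is CV_16S, scaled by 16.
    int SADWindowSize = 9;            // the block size I keep varying
    int numberOfDisparities = 112;    // disparity range; must be divisible by 16
    int cn = 1;                       // number of channels in the input images

    cv::StereoSGBM sgbm;
    sgbm.minDisparity        = 0;
    sgbm.numberOfDisparities = numberOfDisparities;
    sgbm.SADWindowSize       = SADWindowSize;
    sgbm.preFilterCap        = 63;
    sgbm.uniquenessRatio     = 10;
    sgbm.speckleWindowSize   = 100;
    sgbm.speckleRange        = 32;
    sgbm.disp12MaxDiff       = 1;
    sgbm.P1 = 8  * cn * SADWindowSize * SADWindowSize;   // smoothness penalty (small jumps)
    sgbm.P2 = 32 * cn * SADWindowSize * SADWindowSize;   // smoothness penalty (large jumps)

    cv::Mat imgDisparity16S;
    sgbm(rectified_imgLeft, rectified_imgRight, imgDisparity16S);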

What I suspect the problem is:

My disparity image looks relatively acceptable, but the next step is to go to a 3D point cloud using the Q matrix. I suspect I am not calibrating the cameras correctly to generate the right Q matrix. Unfortunately, I've hit a wall in terms of what else I can do to get a better Q matrix. Can someone please suggest a way forward?

The other thing I think might be problematic is the set of assumptions I make when using the cv::stereoCalibrate function. At the moment I calibrate each camera individually to obtain the camera and distortion matrices (cameraMatrix[0], distCoeffs[0] and cameraMatrix[1], distCoeffs[1]), which makes the stereoCalibrate call a little simpler:

stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
                    cameraMatrix[0], distCoeffs[0],
                    cameraMatrix[1], distCoeffs[1],
                    imageSize, R, T, E, F,
                    TermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5),
                    //CV_CALIB_FIX_ASPECT_RATIO +
                    //CV_CALIB_ZERO_TANGENT_DIST +
                    //CV_CALIB_SAME_FOCAL_LENGTH +
                    CV_CALIB_RATIONAL_MODEL 
                    //CV_CALIB_FIX_K3 + CV_CALIB_FIX_K4 + CV_CALIB_FIX_K5
                    );
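One assumption I am now questioning: since I pass in already-calibrated intrinsics, should I also be telling stereoCalibrate not to refine them? Something like the following (a sketch of the alternative, not what I currently run):

    // Sketch: fix the individually calibrated intrinsics so that stereoCalibrate
    // only estimates the extrinsics R and T (and E, F) between the two cameras.
    stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
                    cameraMatrix[0], distCoeffs[0],
                    cameraMatrix[1], distCoeffs[1],
                    imageSize, R, T, E, F,
                    TermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5),
                    CV_CALIB_FIX_INTRINSIC);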

Additionally, I think it is useful to mention how I go from the disparity to a point cloud. I use OpenCV's cv::reprojectImageTo3D and then write the data into a PCL point cloud structure. Here is the relevant code:

cv::reprojectImageTo3D(imgDisparity16S, reconstructed3D, Q, false, CV_32F);

for (int i = 0; i < reconstructed3D.rows; i++)
{
    for (int j = 0; j < reconstructed3D.cols; j++)
    {
        cv::Point3f cvPoint = reconstructed3D.at<cv::Point3f>(i, j);

        // Fill in a PCL structure
        pcl::PointXYZRGB point;
        point.x = cvPoint.x;
        point.y = cvPoint.y;
        point.z = cvPoint.z;
        point.rgb = rectified_imgRight.at<cv::Vec3b>(i, j)[0]; // grey information

        point_cloud_ptr->points.push_back(point);
    }
}

point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;
pcl::io::savePCDFileASCII("OpenCV-PointCloud.pts", *point_cloud_ptr);
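Two assumptions in this step that I want to flag, in case they are the cause (sketch below): StereoSGBM's CV_16S output stores the disparity multiplied by 16, so reprojecting it directly would scale all depths by 16; and with handleMissingValues left false, invalid disparities get reprojected as if they were real geometry, which might be the plane I see.

    #include <cfloat>   // FLT_EPSILON
    #include <cmath>    // fabs

    // Assumption 1: convert SGBM's fixed-point disparity (value * 16) to float.
    cv::Mat imgDisparity32F;
    imgDisparity16S.convertTo(imgDisparity32F, CV_32F, 1.0 / 16.0);
    cv::reprojectImageTo3D(imgDisparity32F, reconstructed3D, Q, true, CV_32F);

    // Assumption 2: with handleMissingValues=true, invalid disparities are
    // mapped to Z = 10000; skip those points instead of pushing them into the cloud.
    const float maxZ = 1.0e4f;
    for (int i = 0; i < reconstructed3D.rows; i++)
    {
        for (int j = 0; j < reconstructed3D.cols; j++)
        {
            cv::Point3f p = reconstructed3D.at<cv::Point3f>(i, j);
            if (fabs(p.z - maxZ) < FLT_EPSILON || fabs(p.z) > maxZ)
                continue;   // likely an invalid disparity, not real geometry
            // ... fill the PCL point as above ...
        }
    }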

PS: The reason I chose to upload these images is that the scene has some texture, so I was anticipating a reply saying the scene is too homogeneous. The cover on the partition and the chair are quite rich in texture.

A few questions:

Can you help me remove the image/disparity plane that seems to be part of the point cloud? Why is this happening?

Is there something obvious I am doing incorrectly? I would post my code, but it is extremely similar to the OpenCV examples provided and I do not think I'm doing anything more creative. I can post it if there is a specific section that might be of concern.

In my naive opinion, the disparity image seems OK, but the point cloud is definitely not what I would have expected from a relatively decent disparity image; it is WAY worse.

If it helps, I've included the Q matrix I obtain after camera calibration, in case something obvious jumps out. Comparing it to the Learning OpenCV book, I don't see anything blatantly incorrect ...

Q: rows: 4
   cols: 4
   data: [ 1., 0., 0., -5.9767076110839844e+002,
           0., 1., 0., -5.0785438156127930e+002,
           0., 0., 0.,  6.8683948509213735e+002,
           0., 0., -4.4965180874519222e+000, 0. ]
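For my own sanity check, this is how I read the entries of Q, following the form documented for reprojectImageTo3D (the unit interpretation is my assumption):

    // Q has the form [1 0 0 -cx; 0 1 0 -cy; 0 0 0 f; 0 0 -1/Tx 0], so:
    //   cx ≈ 597.67, cy ≈ 507.85   principal point (pixels)
    //   f  ≈ 686.84                focal length (pixels)
    //   |Tx| ≈ 1/4.4965 ≈ 0.2224   baseline, in the units of my chessboard square size
    // Depth implied for a pixel with disparity d (up to sign convention):
    double f     = 6.8683948509213735e+002;   // Q(2,3)
    double invTx = 4.4965180874519222e+000;   // |Q(3,2)| = 1/|Tx|
    double d     = 40.0;                      // example disparity in pixels (illustrative)
    double Z     = f / (invTx * d);           // ≈ 152.8 / d ≈ 3.8 units here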

Thanks for reading; I'll honestly appreciate any suggestions at this point ...