
Hi Hector,

What you get from the StereoSGBM algorithm is a disparity map, in which the intensity of a pixel is proportional to the inverse scene depth. Hence you cannot simply interpret the pixel intensities as distances. Instead, you have to take the reciprocal of each pixel value and scale it by a constant that depends on your camera calibration data: depth = focal length × baseline / disparity.
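For illustration, here is a minimal Python sketch of that conversion. The focal length and baseline values are placeholders (take them from your own calibration), and note that StereoSGBM returns 16-bit fixed-point disparities that must be divided by 16 first:

```python
import numpy as np
import cv2

# Hypothetical values -- substitute your own calibration results.
focal_length_px = 700.0   # focal length in pixels (from calibration)
baseline_m = 0.12         # distance between the two cameras, in metres

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                               blockSize=9)

# StereoSGBM returns 16-bit fixed-point disparities with 4 fractional
# bits, so divide by 16 to get disparities in pixels.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity: Z = f * B / d.
# Guard against zero/invalid disparities before dividing.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]
```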

A short description of disparity maps and their projection to a 3D point cloud can be found, for example, on page 11 of this document: http://nerian.com/support/documentation/downloads/sp1_manual.pdf

To get meaningful measurements from a disparity map, you should look at the OpenCV function reprojectImageTo3D(). The required Q matrix is computed by stereoRectify() when you compute your rectification transformation. The 3D coordinates that you receive will be measured in the units that you used when specifying the object points during camera calibration.
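A rough Python sketch of that pipeline might look as follows; all the calibration inputs (K1, D1, K2, D2, R, T, image_size) are dummy placeholders standing in for your own stereoCalibrate() results:

```python
import numpy as np
import cv2

# Hypothetical calibration data -- replace with the output of your own
# cv2.stereoCalibrate() run.
K1 = K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
D1 = D2 = np.zeros(5)             # distortion coefficients
R = np.eye(3)                     # rotation between the two cameras
T = np.array([-0.12, 0.0, 0.0])   # translation (baseline), metres
image_size = (640, 480)

# stereoRectify() computes, among other things, the 4x4 reprojection
# matrix Q that maps (x, y, disparity) to 3D coordinates.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, image_size, R, T)

# 'disparity' is a float32 disparity map in pixels (e.g. the one
# computed in the snippet above).
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# points_3d[y, x] holds the (X, Y, Z) coordinates of pixel (x, y),
# expressed in the units of the object points used during calibration.
```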

Regards, Steve