# Calculating distance from the pixel value of a disparity map

Hi,

I have been looking for a simple solution for a while, but haven't come across anything so far.

I have generated a disparity map using the OpenCV StereoBM and StereoSGBM functions and a pair of cameras. I have all of the camera parameters generated by stereo_calib. Is there a fairly straightforward way to calculate the distance using all of this information given the grayscale value of the pixel?

Thanks



I suppose that is possible, but I only need to find the distance of, say, a single pixel, and can ignore the rest of the image. It would be inefficient to calculate the 3D position of the whole image.

There is a good piece of code at [reprojectImageTo3D in OpenCV](http://stackoverflow.com/questions/22418846/reprojectimageto3d-in-opencv).

The second piece of code in the question is the code you need to get just one point's XYZ.


Hey Lassan,

the last line of the documentation for reprojectImageTo3D states: "To reproject a sparse set of points {(x,y,d),...} to 3D space, use perspectiveTransform()". perspectiveTransform() takes a 3D point as src, in your case (x, y, disparity(x, y)), transforms it using the Q matrix obtained from stereoRectify(), and writes the result into dst.

I haven't tested it myself, but something like this should do the trick: 1) push all the points you want to transform into a cv::Mat array, 2) call cv::perspectiveTransform(your_point_array, output_point_array, Qmat).

Hope that helped

Greetings Mathias


actually, for a single point you can just multiply directly: `cv::Mat point3d = Q * (cv::Mat_<double>(4, 1) << x, y, disparity, 1);` and then divide the first three entries by the fourth. This does the same trick for one point as reprojectImageTo3D.

This seems to return the same matrix for every image, a 4x4 matrix. How do I get depth from that?
