Procedure for computing disparity and depth maps

asked 2017-11-28 19:15:33 -0600 by malharjajoo

updated 2018-01-11 19:26:31 -0600

Hi,

Update: Several people have suggested improving the calibration (reducing the RMS reprojection error) to get a better depth/disparity map, but -

1) What is an acceptable RMS error range (approximately)?

2) I can't find a reliable way of improving the calibration, even after providing images at different orientations and z-depths. (I only occasionally get an "okay" RMS error.) What else can I do?
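For reference, the standard OpenCV calibration flow I am following looks roughly like the sketch below (a minimal sketch only - it assumes the chessboard corners have already been detected for each view, and the image size is a placeholder). From what I have read, an RMS reprojection error well under ~0.5 px is commonly cited as "good", which is why 0.65 worries me.

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Filled elsewhere from cv::findChessboardCorners + cv::cornerSubPix on each view.
        std::vector<std::vector<cv::Point3f>> objectPoints;
        std::vector<std::vector<cv::Point2f>> imagePointsL, imagePointsR;
        cv::Size imageSize(640, 480);   // placeholder resolution

        // Calibrate each camera on its own first and check the per-camera RMS.
        cv::Mat K1, D1, K2, D2;
        std::vector<cv::Mat> rvecs, tvecs;
        double rmsL = cv::calibrateCamera(objectPoints, imagePointsL, imageSize, K1, D1, rvecs, tvecs);
        double rmsR = cv::calibrateCamera(objectPoints, imagePointsR, imageSize, K2, D2, rvecs, tvecs);
        std::cout << "left RMS: " << rmsL << "  right RMS: " << rmsR << std::endl;

        // Then estimate the stereo extrinsics with the intrinsics held fixed.
        cv::Mat R, T, E, F;
        double rmsStereo = cv::stereoCalibrate(objectPoints, imagePointsL, imagePointsR,
                                               K1, D1, K2, D2, imageSize, R, T, E, F,
                                               cv::CALIB_FIX_INTRINSIC);
        std::cout << "stereo RMS: " << rmsStereo << std::endl;
        return 0;
    }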

//========================================

Question:

I have a question regarding the procedure for stereo correspondence (i.e. for computing disparity (and depth) images).

Which of the following two procedures is supposed to be used, and why would you prefer one over the other?

  1. Using SURF/SIFT-based detectors/descriptors + a DescriptorMatcher (e.g. the FLANN-based matcher), or
  2. Using the StereoSGBM/StereoBM classes provided by OpenCV?

I am aware of the following -

1) Method 1 will lead to a sparse point cloud, and

2) Method 2 will lead to a dense point cloud.

However, I have tried 2) and haven't yet got great results (even after using a trackbar and fine-tuning the parameters).
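For reference, a minimal StereoSGBM setup along the lines of what I have been tuning is sketched below (the parameter values and file names are only illustrative, not the exact ones I used, and the inputs are assumed to be already rectified):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/highgui.hpp>

    int main()
    {
        // Rectified left/right images (rectification must already have been applied).
        cv::Mat left  = cv::imread("left_rect.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right_rect.png", cv::IMREAD_GRAYSCALE);

        const int blockSize = 5;
        const int numDisparities = 16 * 8;        // must be a multiple of 16
        cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(
            0,                            // minDisparity
            numDisparities,
            blockSize,
            8  * blockSize * blockSize,   // P1: smoothness penalty for small disparity changes
            32 * blockSize * blockSize,   // P2: smoothness penalty for large disparity changes
            1,                            // disp12MaxDiff
            63,                           // preFilterCap
            10,                           // uniquenessRatio
            100,                          // speckleWindowSize
            32,                           // speckleRange
            cv::StereoSGBM::MODE_SGBM_3WAY);

        cv::Mat disp16;                   // fixed-point output: 16 * disparity
        sgbm->compute(left, right, disp16);

        cv::Mat disp8;
        disp16.convertTo(disp8, CV_8U, 255.0 / (16.0 * numDisparities));
        cv::imshow("disparity", disp8);
        cv::waitKey(0);
        return 0;
    }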

Thanks!


Comments

I went the StereoBM route to get the disparity. Some sample data and C++ code can be found at: https://github.com/sjhalayka/opencv_d...

The results are... too simple. The disparity Mat doesn't compare to an actual depth map obtained from a single-camera system like the Xbox 360 Kinect (https://github.com/sjhalayka/kinect_o...).

I mean, prepare to be disappointed. Maybe the first method that you mention produces better results?

sjhalayka ( 2017-11-28 20:28:02 -0600 )

1 will give you only sparse point correspondences (and it is doubtful you will get anywhere with that approach),

while 2 will give you a "dense" (per-pixel) mapping.
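To make the "sparse" point concrete, here is a minimal sketch of option 1 (SIFT + FLANN matching with a ratio test). Note the assumptions: cv::SIFT is in the main features2d module only from OpenCV 4.4 onwards (older builds need xfeatures2d), and the file names are placeholders. Each surviving match gives a disparity at one keypoint only.

    #include <opencv2/features2d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <vector>

    int main()
    {
        cv::Mat left  = cv::imread("left_rect.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right_rect.png", cv::IMREAD_GRAYSCALE);

        cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
        std::vector<cv::KeyPoint> kpL, kpR;
        cv::Mat descL, descR;
        sift->detectAndCompute(left,  cv::noArray(), kpL, descL);
        sift->detectAndCompute(right, cv::noArray(), kpR, descR);

        // FLANN matcher + Lowe's ratio test; disparity is known only at the matched keypoints.
        cv::FlannBasedMatcher matcher;
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(descL, descR, knn, 2);

        std::vector<float> disparities;
        for (const auto& m : knn)
        {
            if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
            {
                float d = kpL[m[0].queryIdx].pt.x - kpR[m[0].trainIdx].pt.x;
                if (d > 0.0f) disparities.push_back(d);   // keep plausible (positive) disparities
            }
        }
        return 0;
    }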

berak ( 2017-11-28 22:11:21 -0600 )

@sjhalayka - Yes, I have tried StereoSGBM and haven't yet got fantastic results. I assumed it was due to the calibration of my cameras (I get a reprojection error of about 0.65 when calibrating with 50 images; I know 0.65 is bad).

malharjajoo ( 2017-11-28 22:50:09 -0600 )

@berak - Yes, you are correct. I have read this somewhere as well, which is why I tried using OpenCV's StereoSGBM/StereoBM, but sadly I haven't got great results (I am not sure if this is due to poor calibration, please see the comment above). Have you ever got a good dense point cloud using OpenCV's StereoSGBM?

malharjajoo ( 2017-11-28 22:53:26 -0600 )

StereoSGBM/StereoBM disparity is very sensitive to the calibration - the left and right views must be rectified so that corresponding points lie on the same epipolar lines (i.e. the same image rows).

Smooth depth coverage over the field of view requires smooth image contrast and a clearly visible random texture on smooth surfaces. This may require active illumination or gain adjustment and/or a pattern/textured light projector.

Various disparity parameters need to be tuned for the application, to best select the image subject of interest.

With all the above, it's possible to get high-quality dense depth map generation - but it pretty much requires understanding how this happens from the physics/optics to imaging and all the way through the relatively straightforward software.
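As a sketch of the rectification step this implies (assuming K1, D1, K2, D2, R, T come from cv::stereoCalibrate; function and variable names here are just illustrative):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    // Rectify both views so that corresponding points lie on the same image row,
    // which is what StereoBM/StereoSGBM assume.
    void rectifyPair(const cv::Mat& K1, const cv::Mat& D1,
                     const cv::Mat& K2, const cv::Mat& D2,
                     const cv::Mat& R,  const cv::Mat& T,
                     const cv::Size& imageSize,
                     const cv::Mat& leftRaw, const cv::Mat& rightRaw,
                     cv::Mat& leftRect, cv::Mat& rightRect, cv::Mat& Q)
    {
        cv::Mat R1, R2, P1, P2;
        cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q,
                          cv::CALIB_ZERO_DISPARITY, 0 /* alpha: crop to valid pixels */);

        cv::Mat mapL1, mapL2, mapR1, mapR2;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_16SC2, mapL1, mapL2);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_16SC2, mapR1, mapR2);

        cv::remap(leftRaw,  leftRect,  mapL1, mapL2, cv::INTER_LINEAR);
        cv::remap(rightRaw, rightRect, mapR1, mapR2, cv::INTER_LINEAR);
        // Q can later be passed to cv::reprojectImageTo3D to turn disparity into depth.
    }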

opalmirror ( 2017-12-04 14:13:24 -0600 )

@opalmirror, thanks for the information.

Before attempting to fix the disparity map, would you have any tips on how to improve the calibration? I have just updated the question with this query as well.

malharjajoo ( 2018-01-11 19:22:36 -0600 )