
Computing disparity map is very SLOW!

asked 2017-02-22 09:20:56 -0600

mirnyy

updated 2017-02-22 09:21:33 -0600

I am computing a disparity map with StereoSGBM from two images of size 5472x3648, but it takes a very long time to process (up to 10 minutes).
Is there any way to speed this up?



Smaller images? For "real-time", resolution is usually VGA or QVGA.

Der Luftmensch ( 2017-02-22 10:15:55 -0600 )

I did the same: lower-resolution images, around 600x400 (approximately, I can't remember exactly). I also posted a vlog about it. The map is not very clear, but it works in real time.

Codacus ( 2017-02-22 12:10:11 -0600 )

1 answer


answered 2017-09-25 19:48:18 -0600

There are several typical steps to try to speed up disparity map calculation:

  1. Resize/scale the left and right images to the smallest usable/meaningful dimensions.
  2. Use the smallest useful correlation window size.
  3. Use the smallest useful numberOfDisparities; trading off minDisparity here may help you select a 'depth of field' of interest.
  4. If you can make StereoBM (or its CUDA version) work for your application, it will be at least an order of magnitude faster than StereoSGBM.


Thank you very much for your answer.

To 1.
I am trying to get a 3D reconstruction at the highest resolution, so resizing the images is not a suitable option in this case.

To 2. and 3.
I already tried different parameters and worked out the optimal values.

To 4.
I already tried StereoBM (and the CUDA version is indeed very fast!), but unfortunately it gives very poor results compared to StereoSGBM.

Do you think StereoSGBM could also be implemented on CUDA (with parallel computation)?

mirnyy ( 2017-09-28 11:56:53 -0600 )

mirnyy, happy to add my thoughts. It sounds like you have a challenging set of requirements.

High spatial resolution across the entire image necessitates high computational cost; it is hard to imagine otherwise. Scaling x and y also scales the number of disparities and the window size by approximately the same factor. Disparity effort therefore scales roughly with x * y * (num_disp + wsz), i.e. O(scale^3).
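Plugging numbers into that rough cost model shows why downscaling pays off so dramatically. This is plain arithmetic under the model above; the image dimensions are from the question and the disparity count is an illustrative assumption.

```python
# Rough cost model from the comment above: effort ~ width * height * num_disparities,
# where the needed disparity range also shrinks with the image scale.
def rel_cost(scale, w=5472, h=3648, num_disp=256):
    return (w * scale) * (h * scale) * (num_disp * scale)

# Quarter-scale images need roughly 1/64th of the work (0.25^3).
speedup = rel_cost(1.0) / rel_cost(0.25)
```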

If any objects needing detailed examination are a small part of the overall scene, then it may be possible to use a couple of passes: first, a disparity map at low resolution to capture the overall scene; then a second disparity map, over a small region of interest, to resolve fine detail.

I'm not a CUDA expert so can't comment.

opalmirror ( 2017-09-28 13:27:35 -0600 )
