
Odd effect of blockSize in StereoSGBM

asked 2018-06-22 08:26:58 -0500

mfischer-gundlach

Hey there,

I am tuning the parameters of my stereo vision pipeline, including the stereo matching done by StereoSGBM. When setting the parameter blockSize I get unexpected results. First of all, my interpretation of the parameter:

In my understanding the SGBM algorithm optimizes along multiple scan lines (that's where the "global" in the name originates from?), i.e. we solve the problem of finding the best disparity value for each pixel, with the cost function aggregated along the scan lines.

MOST IMPORTANT QUESTION: Increasing the blockSize mitigates the effect of noise, BUT if I find a solution for a small blockSize, this should always remain a solution for a larger blockSize. However, in my trials increasing the blockSize resulted in fewer matched pixels, see images (from top to bottom, blockSize is 15, 21 and 27). HOW COME?

[image: three disparity maps] (The disparity maps are obtained from images of a calibration board.)

Bonus questions:

Question 1: We use a window (block) to calculate the cost, but we do this for every individual pixel, yes or no?

Question 2: What is the block's shape, is it blockSize-by-blockSize?

Thank you for your help.



In my experience images of a checkerboard have lots of self-similar areas at different scales, with very low contrast/untextured areas in between those points. Both these aspects of checkerboards can easily mislead the block matching algorithm. If a checkerboard is representative of the typical scene you intend to calculate disparity on, this might be hard to solve without projecting a random texture pattern on the checkerboard to reduce the ambiguity and give texture as an aid to the block matcher. If the checkerboard is NOT representative of the target depth subject, then you probably want to measure the performance on the desired target subject instead. Hope these thoughts help.

opalmirror ( 2018-06-26 17:46:19 -0500 )

Also please study the paper "Stereo Processing by Semiglobal Matching and Mutual Information" by Heiko Hirschmuller. The OpenCV implementation, code in stereosgbm.cpp, is based on this. You'll find most of your answers in the code. If I recall correctly, blocks run from -blockSize to +blockSize offset of the pixel being matched.

opalmirror ( 2018-06-26 17:56:02 -0500 )

FYI: the window half-sizes are computed in the code as `int SW2 = SADWindowSize.width/2, SH2 = SADWindowSize.height/2;`, so the block spans -SW2 to +SW2 (and -SH2 to +SH2) around the pixel, not -blockSize to +blockSize. I am sifting through the code to find answers to the other questions.

mfischer-gundlach ( 2018-07-06 03:09:15 -0500 )

1 answer


answered 2018-07-19 09:16:43 -0500

mfischer-gundlach

This problem is a result of the way the uniqueness filter is implemented. For a match to be unique in the sense of the uniqueness constraint, the cost of the best match must be smaller than the cost of every other candidate disparity by the uniqueness margin (set via uniquenessRatio, in percent).

However, due to the initialization of the minimum cost (to MAX) in stereosgbm.cpp

```
for( d = 0; d < D; d++ )
{
    if( Sp[d]*(100 - uniquenessRatio) < minS*100 && std::abs(bestDisp - d) > 1 )
        break;
}
```

we may encounter the problem where no other matching cost is smaller by the required margin, and thus the initially set disparity propagates and is eventually chosen as the winner.

Now, if we increase the block size, the individual blocks' costs no longer differ by much (a block matcher basically applies smoothing to the cost curve). Thus we will more likely encounter the case where no disparity's cost is significantly smaller than the others, and the pixel is rendered far away in the scene.



My answer is (partly) not correct, but I can't downvote it. Before the uniqueness condition is applied, the overall minimum matching cost and its disparity are determined as:

```
for( d = 0; d < D; d++ )
{
    int Sval = Sp[d];
    if( Sval < minS )
    {
        minS = Sval;
        bestDisp = d;
    }
}
```

Therefore, the clause testing uniqueness is correct. However, for better readability the condition could be reformulated as

`if (Sp[d]*(100 - uniquenessRatio) < minS*100 && bestDisp != d)`.

mfischer-gundlach ( 2018-07-24 03:41:01 -0500 )

And since the disp1ptr data is initialized with INVALID_SCALED_DISPARITY, the disparity is unset.

mfischer-gundlach ( 2018-07-24 03:42:48 -0500 )
