numberOfDisparities clarification (StereoBM)

asked 2017-09-19 12:15:16 -0600

TomPollok

updated 2017-09-19 12:51:06 -0600

Hi there,

I'm using StereoBM to estimate depth from a stereo camera (parallel viewing directions).

I did the usual steps like undistortion and rectification of the images, etc.

I also set minDisparity to 0 and numberOfDisparities to a multiple of 16.

The images I used have Full HD resolution (1920x1080).

In the first image I used the parameters minDisparity=0 and numberOfDisparities=128. You can see the color-coded depth map, where red denotes pixels that are close and blue means far.

When I increase numberOfDisparities to 448, I get the middle image, where a lot of depth values are missing on the left side.

In the bottom image I set numberOfDisparities to 800, and almost half of the image does not contain depth information anymore.

Does it make sense that the numberOfDisparities value can influence how many pixels get disparity values? I thought that numberOfDisparities only influences the search range, which starts for each pixel at the minDisparity offset and spans numberOfDisparities candidates.

Shouldn't that mean that with numberOfDisparities I only increase the range in which the block matching algorithm can look for possible candidates along the epipolar line?

Edit: You can find a video here where I play with minDisparity and numberOfDisparities.

(image: color-coded depth maps for numberOfDisparities = 128, 448 and 800)
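For reference, this is roughly what my pipeline looks like (a minimal sketch, not my exact code; the file names, block size and the JET colormap step are placeholders, and the images are assumed to be already undistorted and rectified):

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Already undistorted and rectified grayscale stereo pair (1920x1080)
        cv::Mat leftRect  = cv::imread("left_rectified.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat rightRect = cv::imread("right_rectified.png", cv::IMREAD_GRAYSCALE);

        // Parameters used for the first image: minDisparity = 0, numberOfDisparities = 128
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(128, 21);  // numDisparities, blockSize
        bm->setMinDisparity(0);

        cv::Mat disp16, disp8, colored;
        bm->compute(leftRect, rightRect, disp16);                  // CV_16S, disparity scaled by 16
        disp16.convertTo(disp8, CV_8U, 255.0 / (16.0 * 128.0));    // scale to 0..255
        cv::applyColorMap(disp8, colored, cv::COLORMAP_JET);       // red ~ close, blue ~ far
        cv::imwrite("depth_colored.png", colored);                 // invalid (negative) pixels end up dark blue here
        return 0;
    }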


2 answers


answered 2017-09-25 14:33:34 -0600

opalmirror

updated 2017-09-25 19:49:37 -0600

The left camera may see close objects to the left of the right camera's view, and the right camera may see close objects to the right of the left camera's view.

Imagine the space of valid object surfaces at valid disparities as a truncated pyramid. The base of the pyramid is at the distance corresponding to (and inversely proportional to) minDisparity. The top and bottom of the rectified image views constrain two opposing sides of the pyramid, and the objects visible to both cameras at the left and right of the rectified image views constrain the other two sides. The truncated end of the pyramid is at the distance inversely proportional to minDisparity + numberOfDisparities.
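To put numbers on "inversely proportional", here is a small sketch using the usual pinhole stereo relation Z = f*B/d (the focal length f in pixels and the baseline B below are made-up values, not from the question):

    #include <cstdio>

    int main()
    {
        const double f = 1400.0;   // focal length in pixels (assumed value)
        const double B = 0.12;     // baseline in meters (assumed value)

        const int minDisparity        = 0;
        const int numberOfDisparities = 128;

        // Truncated (near) end of the pyramid: largest valid disparity
        const double zNear = f * B / (minDisparity + numberOfDisparities);
        std::printf("nearest measurable depth: %.2f m\n", zNear);

        // Base (far) end of the pyramid: smallest valid disparity
        if (minDisparity > 0)
            std::printf("farthest measurable depth: %.2f m\n", f * B / minDisparity);
        else
            std::printf("farthest measurable depth: unbounded (minDisparity = 0)\n");
        return 0;
    }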

Recall that the resulting disparity image is mapped into the view of the left camera's rectified image. From the left edge to the right edge of the disparity image there are regions with different properties. In detail, at any point (xd, y) in the disparity image:

  1. When xd < minDisparity, objects are visible to the left camera but out of view of the right camera, or too distant to be valid. OpenCV does not calculate disparity (it would be geometrically useless).

  2. When xd = minDisparity, objects exactly at the maximum range (corresponding to minDisparity) are visible to both cameras and have a valid disparity, but closer objects are only visible to the left camera (and farther objects are too far to be valid). OpenCV does not calculate disparity (it might be useful though, for some applications).

  3. When xd > minDisparity and xd < minDisparity + numberOfDisparities, you'll get a blend of 2) and 4). OpenCV does not attempt to calculate disparity (it might be useful though, for some applications).

  4. When xd = minDisparity + numberOfDisparities, objects at all valid ranges are visible to both cameras (although some objects may be too near or too far to be valid). OpenCV calculates disparity and this is the left edge of the valid disparity image pixels.

  5. When xd > minDisparity + numberOfDisparities and xd < width - numberOfDisparities, objects at all valid ranges are visible to both cameras (although some objects may be too near or too far to be valid). OpenCV calculates disparity and this is the bulk of the disparity image.

  6. When xd >= width - numberOfDisparities, objects at all valid ranges are visible to both cameras (although some objects may be too near or too far to be valid). OpenCV calculates disparity, however getValidDisparityROI interprets this as out-of-view, so any calculation results are clipped and not returned for this region of the disparity image.

I think that 6. is probably in error and should act just like 5. for most applications. I'm considering filing a bug on that.
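To tie this back to the question's numbers: with minDisparity = 0, the band on the left of the disparity image where no disparity is computed (regions 1 through 3 above) is minDisparity + numberOfDisparities columns wide. A quick check against the 1920-pixel-wide images from the question (the percentages are mine):

    #include <cstdio>

    int main()
    {
        const int width        = 1920;  // Full HD width from the question
        const int minDisparity = 0;
        const int settings[]   = {128, 448, 800};

        for (int numberOfDisparities : settings)
        {
            // Columns 0 .. minDisparity + numberOfDisparities - 1 get no disparity (regions 1-3)
            const int leftBand = minDisparity + numberOfDisparities;
            std::printf("numberOfDisparities = %3d -> left band of %3d px (%.0f%% of the image width)\n",
                        numberOfDisparities, leftBand, 100.0 * leftBand / width);
        }
        return 0;
    }

That is consistent with the images in the question: a wide missing band at 448, and almost half of the image at 800.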


Comments

I did file a bug on 6, and the fix is merged in 3.3.1 and 2.4.13.5.

opalmirror ( 2017-11-14 15:16:09 -0600 )

answered 2017-09-19 13:34:07 -0600

LBerger

updated 2017-09-20 11:16:24 -0600

I think the algorithm in StereoBM is:

for each pixel (xr, y) in the right image, look for the best matching pixel (xl, y) in the left image, with xl ranging from xr + minDisparity to xr + minDisparity + numberOfDisparities.

Hence, if [xr + minDisparity, xr + minDisparity + numberOfDisparities] falls outside the image, don't try to match the pixel, because the best match cannot be found under the given constraint.

In StereoBM I think the limits are defined and used here.
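Seen from the left image (where the disparity map lives), the same constraint can be sketched like this (a rough illustration, not the actual OpenCV code; the column 300 and the counting are just an example, and the real matcher scores a block cost for every candidate instead of counting them):

    #include <cstdio>

    int main()
    {
        const int width               = 1920;  // rectified image width from the question
        const int minDisparity        = 0;
        const int numberOfDisparities = 448;
        const int xl                  = 300;   // an example column in the left image

        // Candidate columns in the right image for left pixel (xl, y):
        // xr = xl - d, with d in [minDisparity, minDisparity + numberOfDisparities)
        int candidates = 0;
        for (int d = minDisparity; d < minDisparity + numberOfDisparities; ++d)
        {
            const int xr = xl - d;
            if (xr < 0 || xr >= width)
                continue;                       // this disparity falls outside the right image
            ++candidates;
        }

        std::printf("column %d: only %d of %d disparities can be evaluated\n",
                    xl, candidates, numberOfDisparities);
        return 0;
    }

For columns near the left edge part of the range is cut off, which is why the empty band grows with numberOfDisparities.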


Comments

Thank you for your answer. This sounds like a possible explanation, but I haven't checked the actual implementation yet. It is a good hint on where to look, though.

TomPollok ( 2017-09-20 10:40:26 -0600 )
