
Stereo Map Disparity Values

asked 2019-02-19 07:28:55 -0600 by WannabeEngr, updated 2019-02-19 11:01:50 -0600

I successfully generated my depth map and printed out some x,y coordinates and their corresponding disparities. Through disparity we can integrate distance measurement. The problem is: how can I get static disparity values? When I print them out, the disparity value fluctuates. Is this normal, or do I need to make some adjustments?

Btw, I am using 2 RGB cameras in a live feed.

So I click a specific point in the map (the red circles are the disparity values at an x,y coordinate). As you can see, the 4th and 5th circles show different disparity values; they fluctuate from the first 3.

The printed values are str(disp[y,x]) and str(filteredImg[y,x]) at the clicked x,y coordinate in the disparity map.

(screenshot: clicked points on the disparity map with their printed disparity values)


1 answer


answered 2019-02-19 10:15:11 -0600 by HYPEREGO, updated 2019-02-19 10:22:33 -0600

Not much information is provided, so I'll try to be as exhaustive as I can.

Starting from the beginning: you get the disparity map from the left and right images. This disparity map is static, meaning that if you run the disparity computation over and over with the same L and R images and the same algorithm, you'll always get the same disparity map (as long as you don't change the left and right images, of course!).
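As a quick way to verify this determinism, here is a minimal sketch (the file names and SGBM parameters are placeholders, not from this thread):

import cv2

# Load a fixed stereo pair (placeholder file names)
grayL = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
grayR = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)

# Same inputs, same algorithm: the two results should be identical
d1 = stereo.compute(grayL, grayR)
d2 = stereo.compute(grayL, grayR)
print((d1 == d2).all())  # expected: True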

By output, do you mean displaying the disparity map, or printing its values?

In the first case, the disparity map should always look the same if you use the same images and the same algorithm on each execution. If you see changes there, I would check whether there is something randomized inside your algorithm.

If you mean printing values, maybe you're just reading them the wrong way. Check the matrix type and verify that you access it with the correct data type, or you'll get wrong values. For instance, a CV_64F matrix can't be read as if it were CV_8U.
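As a concrete example (continuing the sketch above; OpenCV's StereoBM/StereoSGBM return a CV_16S map holding the disparity multiplied by 16):

# Check the type before reading values
disp = stereo.compute(grayL, grayR)
print(disp.dtype)                           # int16 (CV_16S), fixed point
disp_real = disp.astype('float32') / 16.0   # SGBM stores disparity * 16
y, x = 240, 320                             # example pixel (hypothetical)
print(disp_real[y, x])                      # true disparity, in pixels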


Comments

Thank you for the effort, and sorry for giving so little information. I am not using still images; my input is a live video feed from two Logitech cameras.

From my observation the rectification came out completely fine, no distortion at all, so I am posting the code where the disparity map is generated. Note: dispL and dispR are the outputs of stereo.compute, and grayL and grayR are the outputs of cv2.remap.

# Filter the raw disparities with the WLS filter, then rescale for display
filteredImg = wls_filter.filter(dispL, grayL, None, dispR)
# With NORM_MINMAX, alpha and beta are simply the two ends of the output
# range (here [1, 255]); scaling uses each frame's own min and max
filteredImg = cv2.normalize(src=filteredImg, dst=filteredImg, beta=1, alpha=255,
                            norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
filteredImg = np.uint8(filteredImg)
cv2.imshow('Disparity', filteredImg)
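One caveat with this snippet: because NORM_MINMAX rescales using each frame's own min and max, the displayed value at a fixed pixel depends on the whole frame, so filteredImg[y,x] can fluctuate on a live feed even when the disparity at that pixel is stable. A sketch of reading the unscaled value instead (assuming the WLS output keeps the 16x fixed-point scale of its SGBM input):

# Read the unnormalized WLS output instead of the display image
raw = wls_filter.filter(dispL, grayL, None, dispR)  # same call as above
print(raw[y, x] / 16.0)  # disparity in pixels, independent of frame min/max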

Moreover...

WannabeEngr ( 2019-02-19 10:50:21 -0600 )

Through a callback function on a mouse click, I can output the x,y coordinate and its disparity values from filteredImg.

def coords_mouse_disp(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        print(str(x), str(y), str(disp[y, x]), str(filteredImg[y, x]))
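For the callback to fire, it has to be attached to the displayed window; a minimal sketch (assuming the window name 'Disparity' from the imshow call above):

cv2.setMouseCallback('Disparity', coords_mouse_disp)  # window must exist first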

Then, from the Python shell, filteredImg[y,x] (the disparity value at a given point) is not static. (Please see the attached image above; I'll edit my post.) Thank you @HYPEREGO

WannabeEngr ( 2019-02-19 10:56:15 -0600 )

Stereo matching and disparity algorithms are quite sensitive to frame-by-frame or camera-to-camera illumination changes and imager noise, especially if there are over- or underexposed areas. This can cause unstable ripples, waves, and artifacts along edges. Controlling illumination and using L+R images captured under identical illumination (at the same time) may help. Pulsing illumination, unsynchronized L+R exposure starts, glare/reflection/transparency, and rolling shutters can all contribute to this.

Also, matching self-similar patterns (especially when the window size is too small) can result in outright confusion (e.g. the stereo-match disparity of a checkerboard or dot pattern, or of low-contrast areas of the image; a random projector pattern can help here if contrast is adequate).

opalmirror ( 2019-02-19 14:04:17 -0600 )

Oh, so the light exposure in the environment, or the L+R images from my calibration, can be a factor? Can it be compensated through the WLS filter or StereoSGBM parameters?

WannabeEngr ( 2019-02-19 18:35:16 -0600 )

Since you're using live streaming video, it is normal that the disparity value at a given point changes, for the reasons opalmirror has underlined; the algorithms have some uncertainty, especially SGBM and similar.

Let's try this: run the algorithm on two static images (try Tsukuba from the Middlebury dataset, for example) and see if the values change, so you can validate your result and then think about how to fix it. Since there is only a small difference between the disparity values, maybe simple post-processing will do the job.
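As one example of such post-processing (a sketch of temporal smoothing, not a method from this thread): keep the last few disparity readings at the clicked pixel and report their median, which damps frame-to-frame jitter:

import collections
import numpy as np

history = collections.deque(maxlen=5)  # last 5 readings at the pixel

def stable_disparity(disp_value):
    # Median over a short window suppresses single-frame fluctuations
    history.append(float(disp_value))
    return float(np.median(history))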

HYPEREGO ( 2019-02-20 04:33:22 -0600 )
