
Matching: Split x and y disparity

asked 2018-01-25 03:55:03 -0600 by __Alex__

Hello everyone,

I have a stereo (3D) image pair, and I want to mark some pixels on the left image (to outline an object) and then find them programmatically in the right image. So I thought about creating a disparity map and simply adding the corresponding values to my pixels' coordinates.

So far I have run SBM_Sample.cpp and it works fine, but I need the disparity separated into x and y components. Is there an algorithm that does this for me?
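For reference, a minimal sketch of the idea using OpenCV's StereoBM; the file names and the marked pixel are placeholders, and the inputs are assumed to be an already rectified 8-bit grayscale pair:

    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main()
    {
        // Load a rectified stereo pair as 8-bit grayscale (StereoBM requires it).
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

        // numDisparities must be divisible by 16; blockSize must be odd.
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
        cv::Mat disp16;
        bm->compute(left, right, disp16); // CV_16S, disparity scaled by 16

        cv::Point marked(320, 240); // a pixel outlined on the left image
        short d = disp16.at<short>(marked);
        if (d >= 0) { // negative values mark invalid matches
            float disparity = d / 16.0f; // undo the fixed-point scaling
            // After rectification the match lies on the same row, shifted in x.
            std::printf("right image: (%.2f, %d)\n", marked.x - disparity, marked.y);
        }
        return 0;
    }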

Cheers

Alex


Comments

The disparity is a 1-channel image. What exactly do you expect to "split" there?

berak ( 2018-01-25 05:33:13 -0600 )

Disparity should be calculated in both the x and y directions, because the cameras aren't guaranteed to be aligned with pixel precision, so the y coordinate of an object might differ between the two images. The OpenCV disparity is just the difference in x, right? Or is it sqrt(x²+y²)?

__Alex__ ( 2018-01-25 06:54:21 -0600 )

1 answer


answered 2018-01-30 14:42:57 -0600

updated 2018-01-30 14:44:30 -0600

The disparity algorithms in OpenCV only look for matches of a small square window patch between the left and right images, with only the x value differing. For this to be effective, each stereo camera is first calibrated for distortion, rotation, and translation, and the images from each camera are then rectified (a fixed remapping of each pixel) before the disparity match is attempted.
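For concreteness, a sketch of that rectification step, assuming the per-camera intrinsics (K1, D1, K2, D2) and the stereo extrinsics (R, T) came out of a prior calibration; all variable names are placeholders:

    #include <opencv2/opencv.hpp>

    // Rectify a stereo pair given prior calibration results, as produced
    // by cv::stereoCalibrate.
    void rectifyPair(const cv::Mat& left, const cv::Mat& right,
                     const cv::Mat& K1, const cv::Mat& D1,
                     const cv::Mat& K2, const cv::Mat& D2,
                     const cv::Mat& R, const cv::Mat& T,
                     cv::Mat& leftRect, cv::Mat& rightRect)
    {
        cv::Size imageSize = left.size();
        cv::Mat R1, R2, P1, P2, Q;
        cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

        cv::Mat map1x, map1y, map2x, map2y;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);

        // After remapping, corresponding points share the same row, so the
        // block matcher only needs to search along x.
        cv::remap(left,  leftRect,  map1x, map1y, cv::INTER_LINEAR);
        cv::remap(right, rightRect, map2x, map2y, cv::INTER_LINEAR);
    }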

A disparity-matching algorithm could search the y offset as well as the x offset between the left and right images. And if uncalibrated views are coming in, significant rotation, focal-length, or distortion differences between the cameras would also have to be searched. This is computationally very expensive, and pointless when calibration allows rectification to cancel all of this repeated work up front.
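As a rough illustration of the cost, here is a deliberately naive 2D search for a single patch via cv::matchTemplate; the window size and search ranges are arbitrary assumptions, the marked point is assumed to be far enough from the image border, and a real matcher would repeat this work at every pixel:

    #include <opencv2/opencv.hpp>

    // Returns the (dx, dy) offset of the best match for the window centred
    // at p in the left image, searching the right image in both directions.
    cv::Point2i matchPatch2D(const cv::Mat& left, const cv::Mat& right,
                             cv::Point p, int win = 10,
                             int maxDx = 64, int maxDy = 4)
    {
        cv::Mat patch = left(cv::Rect(p.x - win, p.y - win, 2 * win + 1, 2 * win + 1));

        // Search region: x shifts by up to maxDx, y drifts by up to +/- maxDy.
        cv::Rect search(p.x - win - maxDx, p.y - win - maxDy,
                        2 * win + 1 + maxDx, 2 * win + 1 + 2 * maxDy);
        search &= cv::Rect(0, 0, right.cols, right.rows);

        cv::Mat score;
        cv::matchTemplate(right(search), patch, score, cv::TM_SQDIFF_NORMED);

        cv::Point best;
        cv::minMaxLoc(score, nullptr, nullptr, &best, nullptr); // min = best for SQDIFF
        int cx = search.x + best.x + win, cy = search.y + best.y + win;
        return cv::Point2i(p.x - cx, cy - p.y); // disparity split into x and y parts
    }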


Comments


Thanks for shedding some light. During debugging and testing, though, I'd prefer to have the option to check vertical disparities too, to make rectification errors easier to find. An auto-calibration feature might need something like that as well, but maybe that requires a different algorithm entirely, such as SURF feature matching.
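One possible sketch of that debugging idea: match sparse features between the rectified pair and inspect the residual vertical offsets. ORB stands in here because it ships with core OpenCV; SURF (from the contrib/nonfree module) would work the same way:

    #include <algorithm>
    #include <vector>
    #include <opencv2/opencv.hpp>

    // Median vertical offset of cross-checked feature matches between a
    // rectified pair; values far from zero hint at a rectification error.
    double medianVerticalDisparity(const cv::Mat& leftRect, const cv::Mat& rightRect)
    {
        cv::Ptr<cv::ORB> orb = cv::ORB::create(500);
        std::vector<cv::KeyPoint> kpL, kpR;
        cv::Mat desL, desR;
        orb->detectAndCompute(leftRect,  cv::noArray(), kpL, desL);
        orb->detectAndCompute(rightRect, cv::noArray(), kpR, desR);

        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(desL, desR, matches);

        std::vector<double> dys;
        for (const cv::DMatch& m : matches)
            dys.push_back(kpR[m.trainIdx].pt.y - kpL[m.queryIdx].pt.y);
        if (dys.empty()) return 0.0;

        std::nth_element(dys.begin(), dys.begin() + dys.size() / 2, dys.end());
        return dys[dys.size() / 2]; // near 0 when rectification is good
    }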

__Alex__ ( 2018-01-31 02:58:50 -0600 )

Some calibration algorithms use planar targets with easily recognized points, e.g. a checkerboard or overlapping circles on a movable plane. The targets need to move the points through most of the in-focus volume of the cameras. Several hundred data points are necessary for sub-pixel-accurate models of the camera distortion and relative pose. Other schemes are possible: I recently saw a video in which individually addressable Christmas tree lights were sequentially illuminated and the resulting points were correlated, well enough to generate point clouds and to program the light string to make rough spatial moving-light patterns without conventional calibration targets.
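A rough outline of that checkerboard procedure, assuming viewsL/viewsR hold many synchronized views of a 9x6 inner-corner board with 25 mm squares (all sizes and names are placeholder assumptions):

    #include <vector>
    #include <opencv2/opencv.hpp>

    // Gather board corners across many views, then estimate both cameras'
    // intrinsics and their relative pose in one shot.
    void calibrateStereo(const std::vector<cv::Mat>& viewsL,
                         const std::vector<cv::Mat>& viewsR,
                         cv::Mat& K1, cv::Mat& D1, cv::Mat& K2, cv::Mat& D2,
                         cv::Mat& R, cv::Mat& T)
    {
        cv::Size boardSize(9, 6);
        std::vector<std::vector<cv::Point3f>> objectPoints;
        std::vector<std::vector<cv::Point2f>> cornersL, cornersR;

        for (size_t i = 0; i < viewsL.size(); ++i) {
            std::vector<cv::Point2f> cl, cr;
            if (cv::findChessboardCorners(viewsL[i], boardSize, cl) &&
                cv::findChessboardCorners(viewsR[i], boardSize, cr)) {
                std::vector<cv::Point3f> obj; // board corners in board coordinates
                for (int y = 0; y < boardSize.height; ++y)
                    for (int x = 0; x < boardSize.width; ++x)
                        obj.emplace_back(x * 0.025f, y * 0.025f, 0.0f); // 25 mm squares
                objectPoints.push_back(obj);
                cornersL.push_back(cl);
                cornersR.push_back(cr);
            }
        }

        cv::Mat E, F;
        // flags = 0 lets stereoCalibrate refine intrinsics too; in practice each
        // camera is often calibrated first and CALIB_FIX_INTRINSIC is passed.
        cv::stereoCalibrate(objectPoints, cornersL, cornersR,
                            K1, D1, K2, D2, viewsL[0].size(), R, T, E, F, 0);
        // R and T then feed the rectification step shown in the answer.
    }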

opalmirror ( 2018-02-01 13:33:14 -0600 )

I see. If people go that far, I can see why they're able to eradicate vertical disparity. Thanks for explaining!

__Alex__ ( 2018-02-02 09:41:42 -0600 )
