Large Baseline Stereo Causing Issues

asked 2018-03-07 19:52:03 -0600

timeforscience

updated 2018-03-08 19:54:20 -0600

I'm attempting to perform a stereo disparity calculation with two cameras on a fairly wide baseline, with bad results. I can't seem to find block matching parameters that yield good output: everything looks noisy and pretty terrible. I feel like I've tried everything and could use some advice on what else to try. The parameters I'm currently using are as follows:

cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(16, 1);
bm->setPreFilterCap(4);
bm->setPreFilterSize(5);
bm->setPreFilterType(1);        // cv::StereoBM::PREFILTER_XSOBEL
bm->setBlockSize(7);
bm->setMinDisparity(-103);
bm->setNumDisparities(656);     // must stay divisible by 16
bm->setTextureThreshold(104);
bm->setUniquenessRatio(0);
bm->setSpeckleWindowSize(0);
bm->setSpeckleRange(0);
bm->setDisp12MaxDiff(111);
cv::Mat disp;
bm->compute(leftRect, rightRect, disp);  // rectified 8-bit grayscale inputs

Attached are some calibrated test images.


edit: the images aren't showing up so here are the URLS: https://imgur.com/1RW4Rnr and https://imgur.com/iwvb83L

edit 2: I've done some more tests and found that with a very small baseline (< 10mm) I can get sane results, but the accuracy is low. I'd rather use a large baseline, which should yield better accuracy, but the output just looks like noise.

edit 3: to make it easier to see what I'm talking about, attached are images of the disparity result and the disparity converted to a point cloud in PCL https://imgur.com/a/qXLL0 and https://imgur.com/weSYTDg


Comments

Those images have a single small object and a background with zero texture. You should expect a ton of noise unless you set the uniqueness and texture thresholds appropriately. Are you sure your cameras are rigidly attached? If there is any flex or movement, the calibration will not be useful. StereoBM is a basic local stereo algorithm; you should understand how it works before blindly seeking magic parameter values.

Der Luftmensch ( 2018-03-08 18:41:04 -0600 )

I'm not blindly seeking magical parameter values. I'll explain my decisions if that helps:

  • Pre-filter cap and size values remain low, since high values wash out features on the foreground object
  • Block size remains low for the same reason
  • The minimum disparity is negative because the cameras are convergent, not parallel
  • The texture threshold is high because of the untextured background; it effectively eliminates the noise from the black background while preserving the part
  • Any uniqueness ratio above 0 eliminates all structure
  • Disp12MaxDiff is set high to fill in missing gaps

The cameras are rigidly attached to an 80/20 frame. I've gotten sane values with a small baseline, but not with this large one, and I don't understand why a large baseline yields absolutely no structure, just noise.

timeforscience ( 2018-03-08 19:35:45 -0600 )

What is your image size? 103 + 656 = 759 disparity values to choose from, which is huge. I am guessing speed is not a concern? With a small support region (7x7) and a large image, accurate and unique matches will be unlikely. Try reducing your images to something more typical, like VGA or even QVGA. Then resize, scale, and do some guided filtering. And while you're at it, give SGBM a try; it is nearly as fast and gives much better results. Also, consider what your 3D error requirements are, then determine what baseline and image resolution are required given the location of the object.

Der Luftmensch ( 2018-03-08 20:06:52 -0600 )

Thank you for your guidance! The image is indeed very large, and speed isn't an issue, only accuracy, which is why I've been focusing on a large-baseline setup. I've finally had some results though! Using SGBM, downsampling the image by 50%, and shortening the baseline to bring the cameras as close together as I can, I've managed to actually resolve the part. No fine details yet, but it's a good start.
It's very odd, but it seems that for a given baseline, any object closer than a certain distance simply cannot be resolved. I'll do more research to see what I can find.

timeforscience ( 2018-03-09 13:32:38 -0600 )

I am seeing features in the left and right images that should be on the same line in both images, but are on a higher line in the right image than in the left. So I need to ask: 1) are these raw images that have yet to be rectified? 2) did you follow a proper calibration procedure? 3) please post the correctly rectified images that are the input to the stereo matching step.

Without proper rectification of the raw images to sub-pixel accuracy (which corrects lens distortion and any relative rotation and translation between the cameras), the stereo matching algorithm will not produce quality or meaningful results.

opalmirror ( 2018-04-04 17:25:03 -0600 )