# rectify fisheye stereo setup

Thomas

Hey all,

I'm trying to use a GoPro Hero2 stereo setup (the cameras use fisheye lenses with a 170 degree field of view) and have finished calibrating the cameras with the Caltech Camera Calibration Toolbox (using 5 distortion parameters). My problem is that during rectification OpenCV doesn't crop to the valid region in either image, but instead includes wrapped regions in the corners and blank regions at the borders (see http://i.imgur.com/cWVnD.jpg for an example of the rectification result).

The associated problem is that the region of interest returned by stereoRectify has size (0,0), even though it's supposed to mark the region of valid pixels - or at least some region, certainly not an empty one. At the moment I'm using an alpha value of -1, as this gives the most acceptable output so far. When I set alpha to 0 I only receive valid pixels, but the images are no longer properly rectified (i.e. the optical image centers don't end up at the same y-coordinate; I also tried setting cv::CALIB_ZERO_DISPARITY, but that didn't fix the problem either).

I've also tried sending the raw images + calibration parameters through the ROS pipeline and having the stereo_image_proc node do the rectification, but that led to the same result.

This is the code I'm using to rectify my images:

```cpp
// <load calibration parameters etc.>

int frame = 0;
cv::Mat mapx, mapy;
while (true)
{
    cv::Mat img;
    // <grab the next frame into img>
    if (img.empty())
        break;

    if (frame == 0)
    {
        // build undistortion + rectification map (only once, on the first frame)
        cv::Mat rect1, rect2, proj1, proj2;
        cv::Mat Q; // disparity-to-depth mapping matrix
        double alpha = -1;
        cv::Size imgSize(img.cols, img.rows);
        cv::Size newImgSize = imgSize;
        cv::Rect roi_left, roi_right;
        cv::stereoRectify(M_left, D_left, M_right, D_right, imgSize, R, T,
                          rect1, rect2, proj1, proj2, Q,
                          0 /*cv::CALIB_ZERO_DISPARITY*/, alpha, newImgSize,
                          &roi_left, &roi_right);

        if (useLeft)
            cv::initUndistortRectifyMap(M_left, D_left, rect1, proj1, newImgSize, CV_16SC2, mapx, mapy);
        else
            cv::initUndistortRectifyMap(M_right, D_right, rect2, proj2, newImgSize, CV_16SC2, mapx, mapy);
    }

    // undistort and rectify image
    cv::Mat imgRect;
    cv::remap(img, imgRect, mapx, mapy, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(0));

    // <do stuff>
    ++frame;
}
```


EDIT: I've started debugging the OpenCV code to figure out what's going on. Here is an image showing the inner (green) and outer (red) rectangles that are calculated inside stereoRectify by the icvGetRectangles function to get the regions of interest (the ROI being the intersection of the inner and outer rectangles): http://i.imgur.com/tUKHe.png

Besides those regions (especially the inner one) not making much sense, the problem is also that the inner rectangle always has a negative width:

rect inner: offset: (1331.3, 268.081), size: (-546.046, 253.081)
rect outer: offset: (185.031, -5162.95), size: (10238.4, 5971.77)

Thanks,
Thomas



Kristian K

Hello,

I have had the same problem, also working with ultra-wide-angle lenses. The issue seems to be that stereoRectify may calculate invalid inner and outer rectangles as the basis for centering and scaling the rectified images. I think the reason is that it tries to undistort the extreme boundaries of the original image, and this does not always go well, especially in the image corners when you have a lot of radial distortion. My solution has been to rewrite stereoRectify so that it calculates only the inner rectangle, and only from four points on the image borders: north, south, east and west. That way I get stable behaviour. The price to pay is that the free scaling parameter alpha will no longer work as specified, but for my purposes it is enough to perform some scaling relative to the inner rectangle.

Would it be possible to share the piece of code you modified somewhere, Kristian? It would help a lot of other users. Cheers

( 2013-11-26 05:18:19 -0500 )

jensenb

I have also run into this issue with my setup, even though my lens distortion isn't that heavy. I also had problems with the calculation of the inner and outer rectangles used to determine the new camera matrices. The way OpenCV calculates these rectangles rests on the assumption that the radial distortion is monotonic: it assumes the edges of the input image contain the most heavily distorted points, and therefore only samples along the image border to find the most extreme distortion.

This, however, is not necessarily the case. When you use the higher-order radial model (with k_2 and k_3) or the "rational" model, the camera calibration can converge to a set of distortion parameters that is not monotonic, in which case these assumptions are invalid and the resulting inner and outer rectangles make no sense, as can be seen in Thomas's example image. This has happened to me on occasion, where the outer third of my image had a lower distortion factor than the inner two thirds.

I think the only real solution would be to modify the camera calibration's optimization objective function to include constraints that keep the distortion monotonic throughout the image.

Hi jensenb. How did you manage to solve your situation? Did you finally implement these constraints? I took a look at the code, but it doesn't seem easy to implement to me...

( 2013-11-26 05:22:52 -0500 )