
rectify fisheye stereo setup

asked 2012-07-20 00:44:24 -0600


updated 2012-07-20 15:06:27 -0600

Hey all,

I'm trying to use a GoPro Hero2 stereo setup (the cameras use fisheye lenses with a 170 degree field of view) and have finished calibrating the cameras with the Caltech toolbox (using 5 distortion parameters). My problem is that OpenCV somehow doesn't crop the valid region in both images during rectification, but instead includes wrapped regions in the corners and blank regions at the borders (see the attached image for an example of the rectification result).

A related problem is that the region of interest returned by stereoRectify has size (0,0), even though it's supposed to mark the region of valid pixels - or at least *some* region, certainly not an empty one. At the moment I'm using an alpha value of -1, as this produces the most acceptable output so far. When I set alpha to 0 I only get valid pixels, but the images are no longer properly rectified (i.e. the optical image centers don't end up at the same y-coordinate; I also tried setting cv::CALIB_ZERO_DISPARITY, but that didn't fix the problem either).

I've also tried sending the raw images plus calibration parameters through the ROS pipeline and letting the stereo_image_proc node do the rectification, but that led to the same result.

This is the code I'm using to rectify my images:

// <load calibration parameters etc. >

int frame = 0;
cv::Mat mapx_left, mapy_left, mapx_right, mapy_right; // one map pair per camera

    //read frame from video
    cv::Mat img;

    if(frame == 0)
    {
        //build undistortion + rectification maps (once, on the first frame)
        cv::Mat rect1, rect2, proj1, proj2;
        cv::Mat Q; // disparity-to-depth mapping matrix
        double alpha = -1;
        cv::Size imgSize(img.cols, img.rows);
        cv::Size newImgSize = imgSize;
        cv::Rect roi_left, roi_right;
        cv::stereoRectify(M_left, D_left, M_right, D_right, imgSize, R, T,
                          rect1, rect2, proj1, proj2, Q,
                          0/*cv::CALIB_ZERO_DISPARITY*/, alpha, newImgSize,
                          &roi_left, &roi_right);

        // note: each camera needs its own pair of maps
        cv::initUndistortRectifyMap(M_left, D_left, rect1, proj1, newImgSize,
                                    CV_16SC2, mapx_left, mapy_left);
        cv::initUndistortRectifyMap(M_right, D_right, rect2, proj2, newImgSize,
                                    CV_16SC2, mapx_right, mapy_right);
    }

    //undistort and rectify image (left camera shown; use the right maps for the right image)
    cv::Mat imgRect;
    cv::remap(img, imgRect, mapx_left, mapy_left, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(0));

    // < do stuff >

EDIT: I've started debugging the OpenCV code to figure out what's going on. Here is an image showing the inner (green) and outer (red) rectangles that stereoRectify computes internally via the icvGetRectangles function; the region of interest is the intersection of the inner and outer rectangle:

Besides those regions (especially the inner one) not making much sense, the inner rectangle always has a negative width:

    rect inner: offset: (1331.3, 268.081),  size: (-546.046, 253.081)
    rect outer: offset: (185.031, -5162.95), size: (10238.4, 5971.77)



3 answers


answered 2012-12-18 08:09:21 -0600

Kristian K


I have had the same problem, also working with ultra-wide-angle lenses. The culprit seems to be the stereoRectify function, which can calculate invalid inner and outer rectangles as the basis for centering and scaling the rectified images. I think the reason is that it tries to undistort the extreme boundaries of the original image, and this does not always go well, especially in the corners when there is a lot of radial distortion. My solution has been to rewrite stereoRectify so that it calculates only the inner rectangle, and only from four points on the image borders: north, south, east and west. That way I get stable behaviour. The price to pay is that the free scaling parameter alpha no longer works as specified, but for my purposes it is enough to perform some scaling relative to the inner rectangle.
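To illustrate the idea, here is a rough sketch (hypothetical code, not the actual modification: the names, the fixed-point inversion and the plain radial model are all my assumptions) of how an inner rectangle could be estimated from just the four border midpoints:

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y; };

// Invert the radial model x_d = x * (1 + k1 r^2 + k2 r^4 + k3 r^6) by
// fixed-point iteration (the same idea cv::undistortPoints uses internally).
Pt undistortNormalized(Pt d, double k1, double k2, double k3) {
    Pt u = d;
    for (int i = 0; i < 20; ++i) {
        double r2 = u.x * u.x + u.y * u.y;
        double f = 1.0 + r2 * (k1 + r2 * (k2 + r2 * k3));
        u.x = d.x / f;
        u.y = d.y / f;
    }
    return u;
}

struct Rectd { double x0, y0, x1, y1; };

// Inner rectangle bounded by the undistorted north/south/east/west border
// midpoints, in normalized image coordinates.
Rectd innerRectFromMidpoints(int w, int h, double fx, double fy,
                             double cx, double cy,
                             double k1, double k2, double k3) {
    Pt north = {(w / 2.0 - cx) / fx, (0.0 - cy) / fy};
    Pt south = {(w / 2.0 - cx) / fx, (h - 1.0 - cy) / fy};
    Pt west  = {(0.0 - cx) / fx,     (h / 2.0 - cy) / fy};
    Pt east  = {(w - 1.0 - cx) / fx, (h / 2.0 - cy) / fy};
    Pt un = undistortNormalized(north, k1, k2, k3);
    Pt us = undistortNormalized(south, k1, k2, k3);
    Pt uw = undistortNormalized(west,  k1, k2, k3);
    Pt ue = undistortNormalized(east,  k1, k2, k3);
    return {uw.x, un.y, ue.x, us.y};
}
```

With strong barrel distortion the four border midpoints are where the valid region pinches in, so the rectangle they bound stays meaningful even when the image corners cannot be undistorted reliably.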



Would it be possible to share the piece of code you modified somewhere, Kristian? It would help other users a lot. Cheers

Josep Bosch (2013-11-26 05:18:19 -0600)

answered 2013-07-05 06:44:48 -0600

jensenb

updated 2013-07-05 06:45:46 -0600

I have also run into this issue with my setup, although my lens distortion isn't even that heavy, and I also had problems with the calculation of the inner and outer rectangles used to determine the new camera matrices. The way OpenCV computes these rectangles rests on the assumption that the radial distortion is monotonic: the edges of the input image are assumed to contain the most heavily distorted points, so OpenCV samples only along the image border to find the most extreme distortion.

This, however, is not necessarily the case. When you use the higher-order radial model (with k_2 and k_3) or the "rational" model, the camera calibration can converge to a set of distortion parameters that is not monotonic. The assumption is then violated, and the resulting inner and outer rectangles make no sense, as can be seen in Thomas's example image. This has happened to me on occasion, with the outer third of my image having a lower distortion factor than the inner two thirds.
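A cheap way to detect this degenerate case before calling stereoRectify is to check the derivative of the radial mapping r -> r * (1 + k1 r^2 + k2 r^4 + k3 r^6) over the image's radius range; the model is monotonic up to rMax iff 1 + 3 k1 r^2 + 5 k2 r^4 + 7 k3 r^6 stays positive on [0, rMax]. This is a hypothetical helper, not part of OpenCV:

```cpp
#include <cassert>

// Returns true if the radial distortion model is monotonic on [0, rMax],
// where rMax is the largest normalized radius in the image (usually the
// distance from the principal point to the farthest image corner, divided
// by the focal length). Dense sampling of the derivative is enough here.
bool radialModelIsMonotonic(double k1, double k2, double k3, double rMax,
                            int samples = 1000) {
    for (int i = 0; i <= samples; ++i) {
        double r = rMax * i / samples;
        double r2 = r * r;
        double deriv = 1.0 + r2 * (3.0 * k1 + r2 * (5.0 * k2 + r2 * 7.0 * k3));
        if (deriv <= 0.0) return false;
    }
    return true;
}
```

If this check fails, the border-sampling logic inside stereoRectify is operating outside its assumptions and the resulting rectangles should not be trusted.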

I think the only real solution to this would be to modify the camera calibration optimization objective function to include constraints so that the distortion factors are monotonic throughout the image.



Hi jensenb. How did you manage to solve your situation? Did you end up implementing these constraints? I had a look at the code, but it doesn't seem easy to implement to me...

Josep Bosch (2013-11-26 05:22:52 -0600)

@Josep Were you able to solve your issue? I am running into this issue as well and have not figured it out yet

pmt17 (2014-10-08 13:10:18 -0600)

answered 2015-03-29 22:21:10 -0600

I see this is an old thread, but this problem set me back a couple of months, so hopefully this helps somebody. I used the OpenCV chessboard calibration technique and had the same problem. Everything before the stereo calibration was easy--I just followed the steps in the book. I'll assume you already have the left and right camera matrices, the left and right distortion coefficients, and an appropriate set of chessboard corners (image points) for the stereo calibration.

First, I used initUndistortRectifyMap() to create pixel mappings that remove the distortion from the individual camera outputs:

// initUndistortRectifyMap(InputArray cameraMatrix, InputArray distCoeffs, InputArray R, InputArray newCameraMatrix, Size size, int m1type, OutputArray map1, OutputArray map2)
// 'empty' is an empty cv::Mat, i.e. no rectification rotation
initUndistortRectifyMap(left_cameraMatrix, left_distCoeffs, empty, left_cameraMatrix, left_image_size, CV_32FC1, monomap_l1, monomap_l2);
initUndistortRectifyMap(right_cameraMatrix, right_distCoeffs, empty, right_cameraMatrix, right_image_size, CV_32FC1, monomap_r1, monomap_r2);

Then I applied undistortPoints() to my image points:

// undistortPoints( InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray R=noArray(), InputArray P=noArray())
undistortPoints( left_image_points, left_image_points, l_cameraMatrix, l_distCoeffs, empty, l_cameraMatrix);
undistortPoints( right_image_points, right_image_points, r_cameraMatrix, r_distCoeffs, empty, r_cameraMatrix);

Now if I remap() the images using the "monomaps" and call drawChessboardCorners() on them, the chessboard corners line up with the distortion-corrected images.

Last step: use the distortion-corrected chessboard corners to do stereoCalibrate():

Mat R, T, E, F; //stereo calibration information
Mat zerodistortion = Mat::zeros(1,5,CV_32FC1);  //empty distortion matrix
stereoCalibrate(object_points, left_image_points, right_image_points, left_cameraMatrix, zerodistortion, right_cameraMatrix, zerodistortion, left_image_size, R, T, E, F);

This worked! I was at a conference recently where I spoke with a guy teaching in Spain and he said he was doing the same exact thing with his cameras.

Now, to see the rectified outputs, you have to create the stereo rectification maps. (Rl, Pl, Rr, Pr below are the rectification rotations and projections returned by stereoRectify(), called with the zero-distortion matrices.)

// initUndistortRectifyMap(InputArray cameraMatrix, InputArray distCoeffs, InputArray R, InputArray newCameraMatrix, Size size, int m1type, OutputArray map1, OutputArray map2)
initUndistortRectifyMap(l_cameraMatrix, zerodistortion, Rl, Pl, image_size, CV_32FC1, map_l1, map_l2);
initUndistortRectifyMap(r_cameraMatrix, zerodistortion, Rr, Pr, image_size, CV_32FC1, map_r1, map_r2);

Then (I was too lazy to fix this), to see the rectified output, you have to remap your input images twice: once for monocular undistortion and once for stereo rectification:

// Remap images to remove monocular distortions
remap(left, left_undist, monomap_l1, monomap_l2, INTER_LINEAR);
remap(right, right_undist, monomap_r1, monomap_r2, INTER_LINEAR);

// Remap images to rectify
remap(left_undist, left_rect, map_l1, map_l2, INTER_LINEAR);
remap(right_undist, right_rect, map_r1, map_r2, INTER_LINEAR);
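If you want to avoid interpolating twice, the two float maps can be folded into one by evaluating the monocular map at the sub-pixel locations produced by the rectification map (in OpenCV you can get a similar effect by remap()-ing the monocular maps through the rectification maps). Here is a stand-alone sketch with plain vectors standing in for CV_32FC1 maps; all names are illustrative:

```cpp
#include <cassert>
#include <vector>

struct Map {                       // one CV_32FC1-style coordinate plane
    int w, h;
    std::vector<float> v;          // v[y * w + x]
    float at(int x, int y) const { return v[y * w + x]; }
};

// Bilinear lookup with clamping at the borders.
static float sampleBilinear(const Map& m, float x, float y) {
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x > m.w - 1) x = float(m.w - 1);
    if (y > m.h - 1) y = float(m.h - 1);
    int x0 = int(x), y0 = int(y);
    int x1 = x0 < m.w - 1 ? x0 + 1 : x0;
    int y1 = y0 < m.h - 1 ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;
    return (1 - fy) * ((1 - fx) * m.at(x0, y0) + fx * m.at(x1, y0)) +
           fy       * ((1 - fx) * m.at(x0, y1) + fx * m.at(x1, y1));
}

// combined(x, y) = mono(rect(x, y)): rect maps rectified -> undistorted
// coordinates, mono maps undistorted -> raw coordinates, so the combined
// map goes straight from the raw image to the rectified one.
void composeMaps(const Map& monoX, const Map& monoY,
                 const Map& rectX, const Map& rectY,
                 Map& outX, Map& outY) {
    outX = rectX;
    outY = rectY;                  // same geometry as the rectification map
    for (int y = 0; y < rectX.h; ++y)
        for (int x = 0; x < rectX.w; ++x) {
            float ux = rectX.at(x, y), uy = rectY.at(x, y);
            outX.v[y * rectX.w + x] = sampleBilinear(monoX, ux, uy);
            outY.v[y * rectX.w + x] = sampleBilinear(monoY, ux, uy);
        }
}
```

With the composed maps, a single remap() per image replaces the double remap above, which also avoids blurring the image with two rounds of interpolation.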

Long story short: stereoCalibrate() couldn't handle the extreme distortion, but you can remove that distortion before calling stereoCalibrate().

