# Undistortion at far edges of image

I have obtained the camera matrix and distortion coefficients for a GoPro Hero 2 using calibrateCamera() on a list of points obtained with findChessboardCorners(), essentially following this guide.

I then undistorted the image with initUndistortRectifyMap() and remap() (with R as the identity matrix). All looks fine, but the borders are a bit cropped, so I tried "zooming out" with getOptimalNewCameraMatrix() with centerPrincipalPoint=True. What I find is that the edges are wrapped around into a sort of bubble shape instead of being "pointy". Information where these points would be is lost and replaced with information from towards the center of the image. Is this normal (i.e. an artifact of the mapping function), or could it be caused by a poor camera matrix? I tested the undistortion with a camera matrix and distortion coefficients I found online for the same camera and footage, and I also tried applying the undistortion to the calibration footage itself; the effect is the same. I'm attaching a sample of the result.

I also noticed that the free scaling parameter of getOptimalNewCameraMatrix() behaves arbitrarily, rather than interpolating smoothly between 0 and 1 as the documentation describes, and that the effect is very sensitive to the original camera matrix.


What you are seeing here are deficiencies of both the getOptimalNewCameraMatrix and initUndistortRectifyMap (actually undistortPoints) functions that only become noticeable for cameras with strong radial distortion. In short, getOptimalNewCameraMatrix tries to estimate how much the current camera matrix has to be scaled (and the resulting image cropped) in order to avoid empty (black) dents or outward bubbles in the output image. The function assumes that the radial distortion described by your distortion coefficients is monotonic, i.e. that the warping is consistently outward or consistently inward across the image. Under this assumption, getOptimalNewCameraMatrix discretely samples an 8x8 grid of points evenly distributed throughout the image, undistorts them (using undistortPoints) with an approximate algorithm that again assumes the distortion function is monotonic, and uses a heuristic to find the subset of the grid points guaranteed to lie within the image after undistortion. From these points it estimates how much to zoom the original camera matrix so that only valid image pixels are visible.

So here is where things break down, and why you get that weird reflective ring. The OpenCV calibration algorithm does not guarantee that the estimated distortion function is monotonic; in fact, if you have enabled the coefficients k2 and k3, it returns an arbitrary function that is highly sensitive to your input images. I have often noticed that the estimated distortion function changes from outward warping to strong inward warping near the border of the image (in cameras with strong radial distortion). So what happens to getOptimalNewCameraMatrix and undistortPoints when the monotonicity constraint is violated? First, at points where the distortion function changes sign (and warping type), undistortPoints estimates completely wrong undistorted point locations. Second, and more severe, getOptimalNewCameraMatrix then fails to estimate the correct visible subset of the image, producing arbitrary and counterintuitive results.

In your case, the distortion function is not monotonic: around the image border there is a strong switch in distortion type from outward to inward. The ring you see is a result of the distortion function being so strongly inward-warping at those positions that it samples part of the image a second time. Because of the switch between outward and inward warping around the border (and the errors discussed above), getOptimalNewCameraMatrix mistakenly believes it must zoom out rather than zoom in.

There is no quick, guaranteed solution to this problem. You must recalibrate your camera, making sure to get plenty of views close to the image border. Then sample the distortion function around the image border and check that it does not switch distortion type. Repeat this process until you converge to an acceptable solution.
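Sampling the distortion function can be done directly from the estimated coefficients, without touching any image. A minimal sketch, using only the radial terms of the standard 5-parameter model (r is the radius in normalized image coordinates; r_max should be the normalized radius of the image corner, roughly 1.2 for a GoPro-like setup, which is an assumption here):

```python
import numpy as np

def distorted_radius(r, k1, k2, k3):
    """Distorted radius r_d as a function of the undistorted radius r
    (normalized image coordinates), radial part of the OpenCV model."""
    r2 = r * r
    return r * (1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3)

def is_monotonic(k1, k2, k3, r_max):
    """Densely sample r_d(r) up to r_max (the normalized radius of the
    image corner) and check that it never stops increasing."""
    r = np.linspace(0.0, r_max, 2000)
    rd = distorted_radius(r, k1, k2, k3)
    return bool(np.all(np.diff(rd) > 0.0))

# Mild barrel distortion stays monotonic out to the corners ...
print(is_monotonic(-0.2, 0.0, 0.0, r_max=1.2))                # True
# ... while large alternating k2/k3 terms switch warping direction.
print(is_monotonic(-0.36835, 5.26118, -31.85337, r_max=1.2))  # False
```

If this check fails for your coefficients, no choice of free scaling parameter will give you a clean result near the border.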


I have seen this effect as well, especially with strong radial distortion cameras. This explanation is good, but is there a reference in the literature to the algorithm used in getOptimalNewCameraMatrix? We've been experimenting some, and it's not conclusive, but the algorithm's misbehavior sometimes seems a bit arbitrary (i.e., an adjacent point gives a seemingly sane answer).

(2014-03-17 17:46:34 -0500)

No, I don't think you will find the strategy used in getOptimalNewCameraMatrix discussed anywhere in the literature. The monotonicity assumption is not a problem for radial distortion modelled using only the second order term, but adding in the 4th and 6th order terms clearly breaks it. The real problem is that the distortion estimation part of the calibration procedure is unconstrained. For a recent discussion of other radial distortion models see Brito et al. 2013, "Radial Distortion Self-Calibration".

(2014-03-18 02:52:41 -0500)

+1, this answer needs more votes!

(2015-02-13 16:18:43 -0500)

For me, I was able to "solve" the problem simply by zooming in. The "fish bowl" effect of radial distortion is most pronounced near the edges of the field-of-view, so by zooming in, you are effectively "cropping" your image and thereby reducing the extreme radial distortion. This may not be practical for your application, if you require the widest angle possible, or if your camera doesn't have zoom, but it worked for me! (I wouldn't have thought of this solution if it weren't for this answer, where you noted that these deficiencies in the algorithm are only apparent with cameras with strong radial distortion, so thanks @jensenb!)

(2016-11-17 09:29:40 -0500)

Maybe it is better solved with the fisheye calibration in OpenCV 3+?

(2017-04-24 11:14:14 -0500)

Here is a python implementation of the relevant functions that uses Newton-Raphson followed by a binary line search for a better calculation of the inverse distortion function. This results in a more correct optimal new camera matrix.

The repository also contains a function that enables using calibration images in which not all of the corners are visible or detected, allowing better constraints near the boundary of the image. This lets the distortion parameters be calculated more precisely and makes the undistorted image more accurate. Moreover, it reduces the chance of encountering the getOptimalNewCameraMatrix bug.

Please note that this implementation uses only the first 5 distortion parameters.
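The core idea, inverting the radial distortion function with Newton-Raphson rather than OpenCV's fixed-point iteration, can be sketched in a few lines (radial terms only; the tangential terms p1, p2 and the linked repository's binary line search are omitted for clarity):

```python
import numpy as np

def distort_radius(r, k1, k2, k3):
    # radial part of the 5-parameter model (tangential terms omitted)
    r2 = r * r
    return r * (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3)

def undistort_radius(rd, k1, k2, k3, iters=20):
    """Invert r -> r_d with Newton-Raphson. Only reliable where the
    distortion function is monotonic, as discussed in the accepted
    answer above."""
    r = rd  # initial guess: distorted radius
    for _ in range(iters):
        r2 = r * r
        f = r * (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) - rd
        df = 1 + 3 * k1 * r2 + 5 * k2 * r2**2 + 7 * k3 * r2**3
        r = r - f / df
    return r

# Round trip with mild barrel distortion: recover r = 0.8 exactly.
r = 0.8
rd = distort_radius(r, -0.2, 0.05, 0.0)
print(abs(undistort_radius(rd, -0.2, 0.05, 0.0) - r))  # ~0
```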


Thanks for your solution, worked for me.

(2019-11-07 02:56:49 -0500)

Ok, I've been wracking my brain on this and I think I've finally solved it!

Tl;dr -- make sure the edges are well represented by the checkerboard in your calibration images.

The key is that the distortion is extreme at the edges, and my theory was that without good checkerboard representation at the edges, the edge distortion solution would not be very accurate. This seems to have fixed it. If you want some numbers for navel-gazing, here are my distCoeffs for two different calibration image sets: one close up, the other mostly far away.

Note: the coefficients are both positive and negative (meaning "moustache distortion", i.e. not monotonic). Note in particular the really big values for the far-away images, indicating severe edge distortion. My idea was that since distortion is mild in the center, if all my checkerboards are in the center, the distortion could solve in widely varying ways, subject to factors like roundoff error and such.

Looks like my theory is correct -- the coefficients and ROI converge nicely as I increase the images for the near images. Not quite so nice for the far ones.

Bonus ProTip (from Paul Debevec's PhD thesis, chapter 4) -- you can't do gamma correction (or any lightening) on calibration images, because this can shrink or expand the edges of white on black squares!

(The coefficients are ordered k1, k2, p1, p2, k3 -- p1 and p2 are the tangential distortion coefficients.)

Far:

• Dec 13, 10 far images: dist [[-0.36835 5.26118 -0.01009 -0.00408 -31.85337]] roi (31, 94, 1843, 889)
• Dec 13, 50 far images: dist [[ 0.10918 0.78833 -0.00348 0.00295 -5.18248]] roi (0, 0, 0, 0)
• Dec 13, 100 far images: dist [[ 0.11246 0.55816 -0.00238 0.00048 -3.41276]] roi (0, 0, 0, 0)

Close:

• Dec 14, 10 close images: dist [[ 0.09757 -0.11132 -0.0049 -0.00319 0.05245]] roi (8, 14, 1905, 1058)
• Dec 14, 50 close images: dist [[ 0.16167 -0.38233 -0.00299 -0.00002 0.21451]] roi (9, 5, 1901, 1067)
• Dec 14, 100 close images: dist [[ 0.17922 -0.45282 -0.00315 -0.00019 0.29558]] roi (8, 4, 1903, 1069)
• Dec 14, 200 close images: dist [[ 0.18474 -0.46364 -0.00253 -0.00078 0.30383]] roi (7, 5, 1904, 1068)
• Dec 14, 287 close images: dist [[ 0.18519 -0.46466 -0.00253 -0.00083 0.30463]] roi (7, 5, 1904, 1068)
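The "converge nicely" observation can be made quantitative by comparing successive coefficient estimates. A small sketch using the close-up numbers above (pure bookkeeping, no OpenCV needed):

```python
import numpy as np

# Coefficient estimates copied from the close-up runs listed above.
close = {
    10:  [0.09757, -0.11132, -0.0049, -0.00319, 0.05245],
    50:  [0.16167, -0.38233, -0.00299, -0.00002, 0.21451],
    100: [0.17922, -0.45282, -0.00315, -0.00019, 0.29558],
    200: [0.18474, -0.46364, -0.00253, -0.00078, 0.30383],
    287: [0.18519, -0.46466, -0.00253, -0.00083, 0.30463],
}

# Largest absolute change in the radial terms (k1, k2, k3) between
# successive runs; a steadily shrinking value suggests convergence.
counts = sorted(close)
radial = [0, 1, 4]  # indices of k1, k2, k3 in (k1, k2, p1, p2, k3)
deltas = []
for a, b in zip(counts, counts[1:]):
    d = np.max(np.abs(np.array(close[b])[radial]
                      - np.array(close[a])[radial]))
    deltas.append(float(d))
    print(f"{a} -> {b} images: max radial change {d:.5f}")
```

Running the same comparison on the far-away runs shows much larger jumps, matching the non-convergence described above.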


## Stats

Asked: 2014-02-15 17:01:39 -0500

Seen: 8,893 times

Last updated: Mar 04 '14