undistortPoints() returns odd/nonsensical values despite apparently functional camera calibration

asked 2019-02-23 23:40:47 -0600 by edelmanjm

updated 2019-12-09 08:05:05 -0600 by Akhil Patel

Not the most advanced OpenCV user/math-skilled individual, so please bear with me.

I've been following this short tutorial in an effort to calibrate a fisheye lens in OpenCV. So far, everything seems to be working as the tutorial prescribes: I was able to obtain a working camera matrix and distortion coefficients, and successfully undistort images (i.e. running images through the provided code produces output that appears correct). Following the second part of the tutorial, I've also been able to adjust the balance.
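For reference, the image undistortion that I do have working looks roughly like this (a minimal sketch of the tutorial's approach, using the K and D from my calibration; the image filename is just a placeholder):

import cv2
import numpy as np

DIM = (2592, 1944)  # the resolution I calibrated at
K = np.array([[1076.7148792467171, 0.0, 1298.9712963540678],
              [0.0, 1078.515014983842, 929.9968760065017],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.016205134569390902], [-0.02434305021164351],
              [0.024555436941429715], [-0.008590717479362648]])

img = cv2.imread("test.jpg")  # placeholder path; any image at DIM resolution
# Build the undistortion remap tables once, then warp the whole image.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, DIM, cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)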

However, my application is that I want to undistort certain points (namely contours and the centers of bounding boxes) rather than entire images, for performance reasons. As such, I thought I'd use cv2.fisheye.undistortPoints(). My understanding is that this should produce "ideal point coordinates", i.e. pixel coordinates corrected for the lens distortion. However, it doesn't appear to be working as I expected.
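For context, the shape handling I have in mind for contours looks something like this (just a sketch; undistort_contour() is a hypothetical helper of mine, and I'm assuming undistortPoints() accepts the N x 1 x 2 point layout that cv2.findContours() produces, converted to float32):

import cv2
import numpy as np

def undistort_contour(contour, K, D):
    # cv2.findContours() gives int32 points of shape (N, 1, 2);
    # undistortPoints() wants the same layout, but as float32/float64.
    pts = contour.astype(np.float32)
    return cv2.fisheye.undistortPoints(pts, K, D)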

Since the tutorial gives a K and a D matrix at the end, I figured I'd just plug those into undistortPoints.

>>> import cv2
>>> import numpy as np
>>> cv2.fisheye.undistortPoints(
  np.asarray([[[0, 0], [2592, 0], [0, 1944], [2592, 1944]]], dtype=np.float32),
  np.array([[1076.7148792467171, 0.0, 1298.9712963540678], [0.0, 1078.515014983842, 929.9968760065017], [0.0, 0.0, 1.0]]),
  np.array([[-0.016205134569390902], [-0.02434305021164351], [0.024555436941429715], [-0.008590717479362648]])
)

array([[[  0.94239926,   0.67358345],
        [  0.13487473,  -0.09684527],
        [ 29.207176  , -22.761654  ],
        [  1.4594778 ,   1.1426234 ]]], dtype=float32)

Those sure aren't pixel coordinates. I thought that maybe they were normalized points, with the bounds of the image being -1 and 1, but these values still don't make sense even within that context.
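If the output really is normalized camera coordinates, then my understanding (which may well be wrong, hence this question) is that re-applying K should get me back to undistorted pixel coordinates, something like:

import numpy as np

K = np.array([[1076.7148792467171, 0.0, 1298.9712963540678],
              [0.0, 1078.515014983842, 929.9968760065017],
              [0.0, 0.0, 1.0]])
fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

# First point of the output above, which came from the (0, 0) corner.
x, y = 0.94239926, 0.67358345
print((x * fx + cx, y * fy + cy))  # ~(2314, 1656), nowhere near the (0, 0) corner I fed in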

I also attempted to plug in the values obtained from the second part of the tutorial, using balance=1.0. If you're looking at the tutorial, this corresponds to cv2.fisheye.undistortPoints(my_test_points, scaled_K, dist_coefficients, R=numpy.eye(3), P=new_K):

>>> cv2.fisheye.undistortPoints(
  np.asarray([[[0, 0], [2592, 0], [0, 1944], [2592, 1944]]], dtype=np.float32),
  np.array([[1076.7148792467171, 0.0, 1298.9712963540678], [0.0, 1078.515014983842, 929.9968760065017], [0.0, 0.0, 1.0]]),
  np.array([[-0.019215744220979738], [-0.022168383678588813], [0.018999857407644722], [-0.003693599912847022]]), 
  R=np.eye(3), 
  P=np.array([[416.0971612201596, 0.0, 1304.304969960433], [0.0, 416.79282483962464, 927.3730022048695], [0.0, 0.0, 1.0]])
)

array([[[ -8029.981 ,  -5755.497 ],
        [  9563.489 ,  -5012.9556],
        [ 27344.436 , -19400.076 ],
        [-39704.24  , -31231.846 ]]], dtype=float32)

Okay, those look more like pixel coordinates, but they still make no sense: they're far outside the 2592×1944 image.
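For what it's worth, the one sanity check I can actually reason about is the principal point: since it has zero radial distance, I'd expect distortion not to move it, so feeding (cx, cy) through the call should come back out at P's principal point, roughly (1304.3, 927.4). A sketch of that check, with the same K, D, and new P as above:

import cv2
import numpy as np

K = np.array([[1076.7148792467171, 0.0, 1298.9712963540678],
              [0.0, 1078.515014983842, 929.9968760065017],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.019215744220979738], [-0.022168383678588813],
              [0.018999857407644722], [-0.003693599912847022]])
new_K = np.array([[416.0971612201596, 0.0, 1304.304969960433],
                  [0.0, 416.79282483962464, 927.3730022048695],
                  [0.0, 0.0, 1.0]])

center = np.asarray([[[K[0, 2], K[1, 2]]]], dtype=np.float32)
# If I understand the math, this should print approximately new_K's principal point.
print(cv2.fisheye.undistortPoints(center, K, D, R=np.eye(3), P=new_K))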

At this point, I'm really not sure what to do. I've been struggling with this for quite some time now, so any and all help is truly appreciated. If you need the images, matrices, or anything else from me, I'm happy to provide it.

My camera is a 175° FOV RPi Camera (K) mounted on a Raspberry Pi, with the resolution set to the maximum 2592×1944 for the purposes of this question. I'm using OpenCV 3.4.4 with Python 3.


Comments

Did you ever solve this problem?

angelo ( 2020-04-16 18:42:58 -0600 )