
Re-distorting a set of points after camera calibration

asked 2017-05-15 18:39:03 -0600 by bart.p1990

I am working on a project in Python to calibrate a small thermal camera sensor (FLIR Lepton). Because of the limited resolution, the initial distortion removal is not very exact. By using an iterative method I should be able to refine this calibration (for those of you with access to scientific articles, see this link). This requires me to take the following steps (a rough sketch of the loop follows the list):

  1. Use a set of images of a calibration pattern to estimate the initial distortion
  2. Undistort the images
  3. Apply a perspective correction to the undistorted images
  4. Re-estimate the calibration point positions
  5. Remap these refined calibration points back to the original images
  6. Use the refined points to re-estimate the distortion
  7. Repeat until the RMS-error converges
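
In pseudo-Python the loop I have in mind looks roughly like this (detect_pattern, correct_perspective and remap_to_original are placeholder names for my own routines, not real OpenCV functions):

import cv2

# step 1: detect the calibration pattern to get the initial image points
objpoints, imgpoints = detect_pattern(images)
prev_rms = float('inf')
while True:
    # steps 1/6: (re-)estimate the distortion from the current points
    rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, (w, h), None, None)
    # step 2: undistort the images
    undistorted = [cv2.undistort(img, mtx, dist) for img in images]
    # step 3: warp each undistorted image to a fronto-parallel view
    fronto = [correct_perspective(img) for img in undistorted]
    # step 4: re-detect the pattern positions in the corrected images
    _, refined = detect_pattern(fronto)
    # step 5: map the refined points back into the original (distorted) images
    imgpoints = remap_to_original(refined, mtx, dist)
    # step 7: stop once the RMS error stops improving
    if abs(prev_rms - rms) < 1e-4:
        break
    prev_rms = rms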

I am stuck at step four. Below are the commands I used to remove the camera distortion from the original image, using the camera matrices and the distortion coefficients.

mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), 5)  # m1type=5, i.e. CV_32FC1
dst = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
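
As I understand it, the maps produced by initUndistortRectifyMap run from the undistorted image back to the distorted one: for each destination pixel they store the source coordinate that remap samples from. A small illustration (the example pixel is arbitrary):

# mapx/mapy answer: "which distorted-image pixel feeds undistorted pixel (u, v)?"
u, v = 40, 30  # arbitrary example pixel
src_x, src_y = mapx[v, u], mapy[v, u]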

I have not been able to figure out how to reverse these commands and remap the new point positions back to the original image. Here is what I am able to do so far (roughly in order of the steps above):

[six images showing the intermediate results of the steps above; the last shows the failed re-distortion attempt]

I have looked online and found the same question asked a number of times, with several examples in C++, which I cannot fully comprehend and modify for my purposes. I have tried the solution suggested by this post, but it has not yielded the desired results; see the last image above. Here is my code for that solution:

def distortBackPoints(x, y, cameraMatrix, dist):
    fx = cameraMatrix[0, 0]
    fy = cameraMatrix[1, 1]
    cx = cameraMatrix[0, 2]
    cy = cameraMatrix[1, 2]
    k1 = dist[0][0] * -1
    k2 = dist[0][1] * -1
    k3 = dist[0][4] * -1
    p1 = dist[0][2] * -1
    p2 = dist[0][3] * -1

    # normalize to camera coordinates
    x = (x - cx) / fx
    y = (y - cy) / fy

    r2 = x * x + y * y

    # radial distortion
    xDistort = x * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2)
    yDistort = y * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2)

    # tangential distortion
    xDistort = xDistort + (2 * p1 * x * y + p2 * (r2 + 2 * x * x))
    yDistort = yDistort + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y)

    # back to pixel coordinates
    xDistort = xDistort * fx + cx
    yDistort = yDistort * fy + cy

    return xDistort, yDistort
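
For reference, this implements the standard OpenCV radial/tangential distortion model, just with the signs of the coefficients flipped in an attempt to invert it:

\[
\begin{aligned}
x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2),\\
y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y,
\end{aligned}
\qquad r^2 = x^2 + y^2
\]

where x and y are normalized coordinates. As far as I can tell, negating the coefficients only approximates the inverse, since the model is not symmetric, which may be why my results are off.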

Then I use this command to call the function:

corners2 = []
for point in corners:
    x, y = distortBackPoints(point[0][0], point[0][1], newcameramtx, dist)
    corners2.append([x, y])

I am new to OpenCV and computer vision, so my knowledge of the algebra behind these solutions is limited. Any hands-on examples or corrections to my current code would be greatly appreciated.

Kind regards,

Bart


Comments

calibrateCamera is already an iterative method.

Is it a flat grid?

How do you make circles in IR?

LBerger (2017-05-16 01:02:39 -0600)

Hi LBerger,

The iterative method is meant to enhance the accuracy of the circle positions, since these thermal camera images are usually very low resolution and thus less accurate.

The circles are made with a laser cutter, and the cardboard is heated so that it thermally contrasts with the background.

bart.p1990 (2017-05-19 16:36:14 -0600)

@bart.p1990 Thanks, I will try a laser cutter. About calibration: if you calibrate an already-calibrated image, I think you are using k1, k2, k3, k4, k5 and k6. I don't know Python, but I think you should try it. With a field of view of 60° (Lepton 3), I don't think you would need the fisheye module.

LBerger (2017-05-20 02:07:52 -0600)

Can you explain in more detail what you mean? This is my current code to get the camera calibration:

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (imgW, imgH), None, None)
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(
    mtx, dist, (imgW, imgH), 1, (imgW, imgH))

Which flags would I need to set to turn off the fisheye distortion model / possibly get better calibration?

My image turns out pretty bad when I try to undistort it the next time around.

bart.p1990 (2017-05-20 06:49:09 -0600)

Sometimes it is difficult to fit all the parameters at once. It is better to fit k1, k2, k3 first and reuse the result as the initial value when fitting all parameters. I don't know Python; I give you a C++ example:

int typeCalib = CALIB_RATIONAL_MODEL | CALIB_FIX_K4 | CALIB_FIX_K5 | CALIB_FIX_K6 | CALIB_ZERO_TANGENT_DIST;
rms = calibrateCamera(pointsObjets, pointsCamera, webcamSize, cameraMatrix,
                      distCoeffs, rvecs, tvecs, typeCalib,
                      TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 50000, 1e-4));
// second pass: keep the first fit as the initial guess and free all parameters
typeCalib = CALIB_RATIONAL_MODEL | CALIB_USE_INTRINSIC_GUESS;
rms = calibrateCamera(pointsObjets, pointsCamera, webcamSize, cameraMatrix,
                      distCoeffs, rvecs, tvecs, typeCalib,
                      TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 50000, 1e-4));
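
A possible Python equivalent of this two-stage fit (my own untested sketch, not from the comment above):

flags = (cv2.CALIB_RATIONAL_MODEL | cv2.CALIB_FIX_K4 | cv2.CALIB_FIX_K5 |
         cv2.CALIB_FIX_K6 | cv2.CALIB_ZERO_TANGENT_DIST)
criteria = (cv2.TERM_CRITERIA_COUNT + cv2.TERM_CRITERIA_EPS, 50000, 1e-4)
# first pass: fit the radial terms k1..k3 only
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (imgW, imgH), None, None,
    flags=flags, criteria=criteria)
# second pass: reuse the result as the initial guess and free all parameters
flags = cv2.CALIB_RATIONAL_MODEL | cv2.CALIB_USE_INTRINSIC_GUESS
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (imgW, imgH), mtx, dist,
    flags=flags, criteria=criteria)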
LBerger (2017-05-20 07:35:20 -0600)

Thank you! Unfortunately, this suggestion has not made a difference for me.

bart.p1990 (2017-05-26 10:29:17 -0600)

Can you post images?

LBerger (2017-05-26 13:35:25 -0600)

1 answer


answered 2017-05-15 19:36:25 -0600 by Tetragramm

I'm assuming your x and y to distort are in pixel coordinates (0 up to the image dimensions), not the normalized version.

There are three steps. First, normalize the points to be independent of the camera matrix, using undistortPoints with no distortion matrix. Second, convert them to 3D points using convertPointsToHomogeneous. Third, project them back to image space using the distortion matrix.

// cameraMatrix and distCoeffs come from your calibrateCamera call
vector<Point2d> ptsOut;   // holds the undistorted points in pixel coordinates
vector<Point3d> ptsTemp;
Mat rtemp, ttemp;
rtemp.create(3, 1, CV_32F);
rtemp.setTo(0);
rtemp.copyTo(ttemp);      // zero rotation and zero translation
// 1. normalize the points (strip the camera matrix, no distortion)
undistortPoints(ptsOut, ptsOut, cameraMatrix, noArray());
// 2. lift them to homogeneous 3D points
convertPointsToHomogeneous(ptsOut, ptsTemp);
// 3. project back through the camera matrix, applying the distortion
projectPoints(ptsTemp, rtemp, ttemp, cameraMatrix, distCoeffs, ptsOut);

I'm seeing a bit of a problem right at the very corner, and I'm not sure why. It's probably where the distortion goes iffy anyway.


Comments

This worked perfectly! Thank you so much. Here is the code I used in Python:

ptsOut = np.array(corners, dtype='float32')
rtemp = ttemp = np.array([0, 0, 0], dtype='float32')
# 1. normalize: strip the (new) camera matrix, no distortion
ptsOut = cv2.undistortPoints(ptsOut, newcameramtx, None)
# 2. lift to homogeneous 3D points
ptsTemp = cv2.convertPointsToHomogeneous(ptsOut)
# 3. project back with the original camera matrix and distortion
output, _ = cv2.projectPoints(ptsTemp, rtemp, ttemp, mtx, dist)
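
(projectPoints returns the projected points together with the Jacobian, hence the unpacking above; output then has shape (N, 1, 2), which you can flatten if needed:)

redistorted = output.reshape(-1, 2)  # (N, 2) array of pixel coordinates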

I have no idea why the initial undistortion does not work like that.

bart.p1990 (2017-05-19 16:26:54 -0600)
