
bart.p1990's profile - activity

2021-12-16 10:48:15 -0600 received badge  Famous Question (source)
2019-11-18 20:09:54 -0600 received badge  Notable Question (source)
2019-05-29 06:42:16 -0600 received badge  Popular Question (source)
2017-05-26 10:29:17 -0600 commented question Re-distorting a set of points after camera calibration

Thank you! Unfortunately, this suggestion has not made a difference for me.

2017-05-26 10:19:39 -0600 received badge  Editor (source)
2017-05-26 10:18:40 -0600 asked a question High RMS calibrateCamera Python

I have made a script that calibrates using an asymmetric circle pattern imaged by a thermal camera. For some reason my RMS error is around 15, yet undistorting my images does a decent job. I have made a GUI to manually adjust the blob detector values for each image and return the ret and corner values on approval.

Am I wrong in thinking that the function's return value should be roughly between 0.1 and 1? Or should I use another function to calculate the RMS correctly?

import glob
from pathlib import Path

import cv2
import numpy as np

# What images to calibrate
folder = "images/4/"
imageList = glob.glob(folder + "*.*")
print("Number of images in directory: " + str(len(imageList)))

imgpoints = []  # 2d points in image plane
objp = np.zeros((gridH * gridW, 3), np.float32)  # gridH, gridW set elsewhere
objp[:, :2] = np.mgrid[0:gridH, 0:gridW].T.reshape(-1, 2)
objpoints = []  # 3d points in real world space

my_file = Path(folder + "initialPoints.npy")
if not my_file.is_file():

    for image in imageList:
        image = cv2.imread(image)
        ret, circles = createFindCirclesWindow(image)

        if ret:
            imgpoints.append(circles)
            objpoints.append(objp)

    np.save(folder + "initialPoints", imgpoints)

else:
    imgpoints = np.load(folder + "initialPoints.npy")
    # Append one objp per loaded array in imgpoints
    for a in imgpoints:
        objpoints.append(objp)

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Calculate camera calibration matrices
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (imgW, imgH), None, None,
    flags=cv2.CALIB_FIX_K4 + cv2.CALIB_FIX_K5, criteria=criteria)

print(ret)
2017-05-20 06:49:09 -0600 commented question Re-distorting a set of points after camera calibration

Can you explain more in detail what you mean? This is my current code to get the camera calibration:

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (imgW, imgH), None, None)
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (imgW, imgH), 1, (imgW, imgH))

Which flags would I need to set to turn off the fisheye distortion model / possibly get better calibration?

My image turns out pretty bad when I try to undistort it the next time around.

2017-05-19 16:36:14 -0600 commented question Re-distorting a set of points after camera calibration

Hi LBerger,

The iterative method is meant to enhance the accuracy of the circle positions, since these thermal camera images are usually very low resolution and thus less accurate.

The circles are made by using a laser cutter and the cardboard is heated so that it thermally contrasts with the background.

2017-05-19 16:26:54 -0600 commented answer Re-distorting a set of points after camera calibration

This worked perfectly! Thank you so much. Here is the code I used in Python.

ptsOut = np.array(corners, dtype='float32')
rtemp = ttemp = np.array([0, 0, 0], dtype='float32')
ptsOut = cv2.undistortPoints(ptsOut, newcameramtx, None)
ptsTemp = cv2.convertPointsToHomogeneous(ptsOut)
output = cv2.projectPoints(ptsTemp, rtemp, ttemp, mtx, dist, ptsOut)

I have no idea why the initial undistortion does not work like that.

2017-05-19 16:25:46 -0600 received badge  Scholar (source)
2017-05-19 16:25:43 -0600 received badge  Supporter (source)
2017-05-16 00:48:07 -0600 received badge  Student (source)
2017-05-15 19:03:40 -0600 asked a question Re-distorting a set of points after camera calibration

I am working on a project in Python to calibrate a small thermal camera sensor (FLIR Lepton). Because of the limited resolution, the initial distortion removal is not very exact. By using an iterative method I should be able to refine this calibration (for those of you with access to scientific articles, see this link). This requires me to take the following steps:

  1. Use a set of images of a calibration pattern to estimate the initial distortion
  2. Undistort the images
  3. Apply a perspective correction to the undistorted images
  4. Re-estimate the calibration point positions
  5. Remap these refined calibration points back to the original images
  6. Use the refined points to re-estimate the distortion
  7. Repeat until the RMS-error converges

I am stuck at step four. Below you see the commands I used to remove the camera distortion from the original image using the camera matrices and the distortion matrix.

mapx,mapy = cv2.initUndistortRectifyMap(mtx,dist,None,newcameramtx,(w,h),5)
dst = cv2.remap(img,mapx,mapy,cv2.INTER_LINEAR)

I have not been able to figure out how to reverse these commands and remap the refined point positions to the original image. So far I am able to do the following (roughly in the order of the steps above):

[six images showing the intermediate results; the last one shows the re-distortion attempt]

I have looked online and found the same question a bunch of times, with several examples in C++, which I cannot fully comprehend and modify for my purposes. I have tried the solution suggested by this post, but it has not yielded the desired results; see the last image above. Here is my code for that solution:

def distortBackPoints(x, y, cameraMatrix, dist):
    fx = cameraMatrix[0, 0]
    fy = cameraMatrix[1, 1]
    cx = cameraMatrix[0, 2]
    cy = cameraMatrix[1, 2]
    k1 = dist[0][0] * -1
    k2 = dist[0][1] * -1
    k3 = dist[0][4] * -1
    p1 = dist[0][2] * -1
    p2 = dist[0][3] * -1
    x = (x - cx) / fx
    y = (y - cy) / fy

    r2 = x * x + y * y

    xDistort = x * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2)
    yDistort = y * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2)

    xDistort = xDistort + (2 * p1 * x * y + p2 * (r2 + 2 * x * x))
    yDistort = yDistort + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y)

    xDistort = xDistort * fx + cx
    yDistort = yDistort * fy + cy

    return xDistort, yDistort

Then I use this command to call the function:

corners2 = []
for point in corners:
    x, y = distortBackPoints(point[0][0], point[0][1], newcameramtx, dist)
    corners2.append([x,y])

I am new to OpenCV and computer vision, so my knowledge of the algebra behind these solutions is limited. Any hands-on examples or corrections to my current code would be greatly appreciated.

Kind regards,

Bart