# Chessboard and circle-grid calibration produce radically different results

I have a camera with a 35 mm nominal focal length that I'm trying to calibrate using a PyQt front end I'm building on top of OpenCV's Python library. It finds and displays matched points. Everything works except the results returned by cv2.calibrateCamera() when using a 4 x 11 asymmetric circle grid and cv2.findCirclesGrid().

Using a 6 x 9 chessboard pattern, I obtain a focal length (converted to mm) of 35.8 mm, which I'd accept for now given the 35 mm lens I'm using. When I use the same camera/lens with the circle-grid pattern, I obtain a focal length of 505.9 mm. This is clearly wrong, and the distortion coefficients (k and p) are also enormous. What am I missing? See the code below.
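For reference, the pixel-to-mm conversion I'm doing is just scaling fx from the camera matrix by sensor width over image width; a quick sketch with made-up numbers (my actual sensor dimensions differ):

```python
# Convert fx (in pixels) from the camera matrix to a focal length in mm.
# Sensor width and image width here are illustrative values, not my real ones.
sensor_width_mm = 22.3   # e.g. a typical APS-C sensor
image_width_px = 5184
fx_px = 8320.0           # mtx[0, 0] from cv2.calibrateCamera

focal_mm = fx_px * sensor_width_mm / image_width_px
print(round(focal_mm, 1))  # 35.8 with these example numbers
```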

Edited down to show only relevant bits. Some variables are defined elsewhere.

```python
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ..., (8,5,0)
def objParams(grid):
    if grid == "Checkerboard":
        objp = np.zeros((6 * 9, 3), np.float32)
        objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
    elif grid == "Circle Grid":
        objp = np.zeros((4 * 11, 3), np.float32)
        objp[:, :2] = np.mgrid[0:4, 0:11].T.reshape(-1, 2)
    return objp

objp = objParams(pattern)
```

```python
# Arrays to store object points and image points from all the images.
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

images = glob.glob(folder + '/*.jpg')
i = 0
for fname in images:
    img = cv2.imread(fname)
    smimg = cv2.resize(img, (0, 0), fx=scaleTo, fy=scaleTo)
    gray = cv2.cvtColor(smimg, cv2.COLOR_BGR2GRAY)
    # step counter
    i += 1
    # Find the chessboard corners
    if pattern == 'Checkerboard':
        print("Now finding corners on image " + str(i) + ".")
        ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
        # If found, add object points, image points
        if ret:
            print("Corners found on image " + str(i) + ".")
            objpoints.append(objp)
            cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners)
            print("Drawing corners on image " + str(i) + ".")
            # Draw and display the corners
            cv2.drawChessboardCorners(smimg, (9, 6), corners, ret)
            if saveMarked:
                cv2.imwrite(os.path.join(outfolder, os.path.basename(fname)), smimg)
            cv2.imshow(os.path.basename(fname), smimg)
            cv2.waitKey(500)
            cv2.destroyAllWindows()
    elif pattern == 'Circle Grid':
        print("Now finding circles on image " + str(i) + ".")
        ret, circles = cv2.findCirclesGrid(gray, (4, 11), flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
        # If found, add object points, image points
        if ret:
            print("Circles found on image " + str(i) + ".")
            objpoints.append(objp)
            imgpoints.append(circles)
            # Draw and display the circles
            cv2.drawChessboardCorners(smimg, (4, 11), circles, ret)
            if saveMarked:
                cv2.imwrite(os.path.join(outfolder, os.path.basename(fname)), smimg)
            cv2.imshow(os.path.basename(fname), smimg)
            cv2.waitKey(500)
            cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], paramMtx, None, None)
```


EDIT: Here are the outputs as well as my pretty print statements:

Chessboard ...


How many images did you use for the calibration process?

I know only the basics of Python, so maybe I missed the information, but how did you construct your object points? For an asymmetric grid, the formula is different, as you can see here: https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/calibration.cpp#L125.

Also, I don't see in your code whether you have supplied the real square size. Otherwise, you will be dependent on a scale factor.

You can look at this tutorial for C++: Camera Calibration with OpenCV.
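To illustrate: supplying the real square size is just a scale on the unit grid. A minimal sketch for the chessboard case (assuming, say, 25 mm squares; use your measured value):

```python
import numpy as np

square_size = 0.025  # 25 mm squares, in metres -- an assumed value, measure yours
objp = np.zeros((6 * 9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
objp *= square_size  # without this, tvecs come out in "squares", not metres
```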

I used 35 images for each pattern. I think you're correct that the object-point vectors aren't being constructed correctly; I only know the basics of C++, so I'm having trouble translating that sample. The arrays are set up as zeros in the objParams() function near the top, and values are appended after the check on whether points were found (if ret == True:). I'm pretty much building on this tutorial (which didn't work initially, for the record), but it's a little light on details. I'm not quite sure how I should construct an object-point array for asymmetric circles.

As for the square size, I'm just scaling it for now.


I'm posting this as a late answer, for folks coming from Google.

The problem is almost certainly that your object points are incorrect. What you've built is a symmetric grid (like a chessboard), which assumes the same number of grid points per row; the asymmetric pattern doesn't have that. This code is adapted from the C++ sample. It's worth plotting your object points to double-check that the result actually looks like your pattern:

```python
import numpy as np
import matplotlib.pyplot as plt

grid_size = 0.03  # 3 cm, or whatever your spacing is
rows, cols = 4, 11

objectPoints = []
for i in range(cols):
    for j in range(rows):
        objectPoints.append((i * grid_size, (2 * j + i % 2) * grid_size, 0))

objectPoints = np.array(objectPoints).astype('float32')

plt.scatter(objectPoints[:, 0], objectPoints[:, 1])
plt.show()
```


Note that the rows are staggered in the pattern; that's where the asymmetry comes from. grid_size is the x-spacing between the circles. The loop generates a 4 x 11 grid, but shifts alternate columns by grid_size in y. Here's a plot of the pattern as generated.
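You can also sanity-check the construction numerically (a quick sketch on top of the plot): with the loop order above, the first `rows` points are column 0 and the next `rows` are column 1, so the stagger shows up as a constant grid_size offset in y between adjacent columns:

```python
import numpy as np

grid_size = 0.03
rows, cols = 4, 11
pts = np.array([(i * grid_size, (2 * j + i % 2) * grid_size, 0)
                for i in range(cols) for j in range(rows)], dtype=np.float32)

assert pts.shape == (44, 3)            # 4 x 11 asymmetric grid
col0_y = pts[0:rows, 1]                # y-coords of column i = 0
col1_y = pts[rows:2 * rows, 1]         # y-coords of column i = 1
print(np.allclose(col1_y - col0_y, grid_size))  # True: alternate columns staggered
```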
