camera calibration focal length is wrong

asked 2019-10-02 15:40:22 -0600

eric_engineer

So I'm sure this is my error, I'm just not sure what I'm doing wrong. I had started a post about how my ArUco markers' Z distance was off by a factor of 2... but then I realized that must point to the focal length in my camera matrix being wrong, and here I am. I got the focal length from the datasheet and then calculated it from fx, fy and the pixel size. I was off by more than 2x.

I guess my calibration approach is wrong. I wrote some Python code to run calibrateCamera, and I took 25 pictures of this chessboard that I printed out. But one thing I didn't do was keep the distance from the chessboard to the camera constant in all those pictures. Could that have screwed me up?

Another thing I did was just print out the chessboard pattern from the OpenCV examples. It didn't exactly fit the paper, so I let it scale a little bit... maybe that's wrong too.

This is my calibration code in Python. It runs, finds the chessboard, and draws all the markers on it. But my focal length is still coming out wrong. I'm thinking the way I made my chessboard and took the pictures is wrong.

import numpy as np
from matplotlib import pyplot as plt
import cv2
import glob
import os
import sys
import math



CHESSBOARD_SIZE = (7, 7)
CHESSBOARD_OPTIONS = (cv2.CALIB_CB_ADAPTIVE_THRESH |
        cv2.CALIB_CB_NORMALIZE_IMAGE | cv2.CALIB_CB_FAST_CHECK)

OBJECT_POINT_ZERO = np.zeros((7*7,3), np.float32)
OBJECT_POINT_ZERO[:, :2] = np.mgrid[0:7,0:7].T.reshape(-1,2)

OPTIMIZE_ALPHA = 0.25

TERMINATION_CRITERIA = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30,
        0.001)


REMAP_INTERPOLATION = cv2.INTER_LINEAR
DEPTH_VISUALIZATION_SCALE = 640

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (6,6,0)
objp = np.zeros((7*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:7].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
Limgpoints = [] # 2d points in image plane.
Rimgpoints = [] 


leftImageDir = 'show'
outputFile = 'output'

def readImagesAndFindChessboards(imageDirectory):
    print("Reading images at {0}".format(imageDirectory))
    imagePaths = glob.glob("{0}/*.jpg".format(imageDirectory))

    filenames = []
    objectPoints = []
    imagePoints = []
    imageSize = None

    for imagePath in sorted(imagePaths):
        image = cv2.imread(imagePath)
        grayImage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        newSize = grayImage.shape[::-1]
        if imageSize is not None and newSize != imageSize:
            raise ValueError(
                    "Calibration image at {0} is not the same size as the others"
                    .format(imagePath))
        imageSize = newSize

        # Note: the third positional argument is the optional corners buffer;
        # detection flags must go in the fourth slot.
        hasCorners, corners = cv2.findChessboardCorners(grayImage,
                CHESSBOARD_SIZE, None, CHESSBOARD_OPTIONS)

        if hasCorners:
            print("HAS corners")
            filenames.append(os.path.basename(imagePath))
            objectPoints.append(OBJECT_POINT_ZERO)
            cv2.cornerSubPix(grayImage, corners, (11, 11), (-1, -1),
                    TERMINATION_CRITERIA)
            imagePoints.append(corners)

        cv2.drawChessboardCorners(image, CHESSBOARD_SIZE, corners, hasCorners)
        cv2.imshow(imageDirectory, image)

        # Needed to draw the window
        cv2.waitKey(1)

    cv2.destroyWindow(imageDirectory)

    print("Found corners in {0} out of {1} images"
            .format(len(imagePoints), len(imagePaths)))

    return filenames, objectPoints, imagePoints, imageSize

(leftFilenames, leftObjectPoints, leftImagePoints ...
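The posted code stops short of the actual calibration call, but the square-size question raised in the comments can be illustrated against the object points defined above. A minimal sketch (the 26.32 mm value comes from the comments; the `objectPointsMM` name is illustrative, not from the posted code). Scaling the object points only changes the units of the translation vectors (extrinsics); fx and fy stay in pixels regardless:

```python
import numpy as np

# Measured side of one printed chessboard square, in millimetres (assumed value).
SQUARE_SIZE_MM = 26.32

# Same 7x7 grid as OBJECT_POINT_ZERO above, but scaled to millimetres.
# Only the extrinsics (tvecs) change units; fx and fy remain in pixels.
objectPointsMM = np.zeros((7 * 7, 3), np.float32)
objectPointsMM[:, :2] = np.mgrid[0:7, 0:7].T.reshape(-1, 2) * SQUARE_SIZE_MM
```

Passing a list of these per image as the objectPoints argument to cv2.calibrateCamera would then yield tvecs in mm while leaving the camera matrix unchanged.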

Comments


Keeping a constant distance to the chessboard is neither necessary nor recommended. Perhaps you did not set the chessboard square size correctly? I can't see this in your code...

Witek ( 2019-10-02 18:42:03 -0600 )

Yes! I was wondering the same thing, but I don't see how to set it in Python.

eric_engineer ( 2019-10-02 19:18:31 -0600 )

I am not sure (I don't use Python), but I think your results are now expressed in square-size units (so f=100 would mean your focal length is 100 square side lengths long). If you want it in your own units (like mm), measure the square side length in those units and multiply at the end of this line, like so:

OBJECT_POINT_ZERO[:, :2] = np.mgrid[0:7,0:7].T.reshape(-1,2) * square_size
Witek ( 2019-10-03 04:44:21 -0600 )

I see your point about units, but doing this: OBJECT_POINT_ZERO[:, :2] = np.mgrid[0:7,0:7].T.reshape(-1,2) * 26.32 gave me the same results as before. I'm still missing something, because a square is 26.32 mm on a side and fx = 1.10590948e+03. That's hundreds of times bigger than the actual focal length.

eric_engineer ( 2019-10-03 07:36:44 -0600 )
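The fx reported by calibration is in pixels, so comparing it directly to a focal length in mm mixes units. Converting requires the pixel pitch at the capture resolution, not the native 1.12 µm pitch. A sketch using the numbers from this thread, under the assumption that the 640-wide image is a pure downscale of the full 4224-pixel sensor width (an assumption the later comments call into question):

```python
fx_px = 1105.9           # fx from calibration, in pixels (640x360 captures)
native_pitch_um = 1.12   # sensor pixel size from the datasheet, micrometres
sensor_w_px = 4224       # native sensor width, pixels
image_w_px = 640         # capture width, pixels

# If the 640-wide image were a pure downscale of the full sensor width,
# each image pixel would span sensor_w_px / image_w_px native pixels:
eff_pitch_um = native_pitch_um * sensor_w_px / image_w_px   # ~7.39 um
f_mm = fx_px * eff_pitch_um / 1000.0                        # ~8.2 mm
```

With the native 1.12 µm pitch instead, the same fx would come out as only ~1.24 mm, which is why multiplying fx by the datasheet pixel size directly gives a misleading answer.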

Another mistake I might have made: I took these pictures at 640x360, while the native resolution of the camera is 4256x3168.

eric_engineer ( 2019-10-03 08:05:35 -0600 )

I made a mistake above - your focal length will not be in mm but always in pixels - my bad. I have no idea how I managed to write that nonsense. Square size does not matter when it comes to calculating the lens parameters; it affects the extrinsic parameters only.

The focal length of a standard camera, expressed in pixels, is usually close to the image width, so a value of 1105 is pretty normal. However, since your image is 640x360, it does seem twice as big. But perhaps you have a narrow-angle lens? How exactly did you calculate your focal length?

Witek ( 2019-10-03 14:10:44 -0600 )
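Witek's rule of thumb can be made concrete: fx in pixels directly fixes the horizontal field of view via fov = 2·atan(w / (2·fx)). A small sketch with the numbers from this thread (illustrative only):

```python
import math

fx_px = 1105.9   # calibrated fx, in pixels
width_px = 640   # image width, in pixels

# Horizontal field of view implied by fx:
hfov_deg = 2.0 * math.degrees(math.atan(width_px / (2.0 * fx_px)))  # ~32 deg

# fx equal to the image width (Witek's "close to image width" case)
# corresponds to roughly 53 degrees:
hfov_if_fx_eq_w = 2.0 * math.degrees(math.atan(0.5))
```

So an fx of ~1106 at 640 pixels wide implies a ~32° horizontal FOV, which is why it looks like a narrow-angle lens unless the capture is actually a crop.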

Thanks for the follow-up, I'm still messing around trying to get this right. I have the full datasheet for the camera and the focal length is 3.69 mm. It is autofocus, but I turned that off for this test (worst-case movement of the AF motor is 0.45 mm). Pixel size is 1.12 um. So I just multiplied that by the returned focal length in pixels. (Oh, and the sensor is 4224x3136, but I reduced the captured image size to cut processing requirements on the mobile device.)

eric_engineer ( 2019-10-03 14:28:15 -0600 )
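With the datasheet values one can also go the other way and predict what fx the calibration should have returned, again under the assumption that the 640-wide capture is a full-width downscale of the sensor (arithmetic sketch only; a crop would change this):

```python
f_mm = 3.69            # datasheet focal length, mm
pitch_mm = 1.12e-3     # pixel pitch, mm
sensor_w_px = 4224     # native sensor width, pixels
image_w_px = 640       # capture width, pixels

# fx in pixels at full resolution: focal length divided by pixel pitch.
fx_native_px = f_mm / pitch_mm                           # ~3295 px

# Expected fx at 640 wide, if the image is a pure downscale:
fx_scaled_px = fx_native_px * image_w_px / sensor_w_px   # ~499 px

# Discrepancy against the calibrated value of ~1106 px:
ratio = 1105.9 / fx_scaled_px                            # ~2.2x
```

The ~2.2x gap between the expected ~499 px and the calibrated ~1106 px lines up with the roughly 2x Z-distance error that started the thread, pointing at the scaling/cropping assumption rather than the calibration itself.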

Just thinking... the 640x360 ratio is clearly different from 4224x3136. How is the small image created? It cannot be a scaled version of the entire image, so we cannot take the entire sensor width into consideration...

Witek ( 2019-10-03 16:53:58 -0600 )

I specify the image size in my request to the Android camera driver, then the camera or the ISP crops and returns it. I realized today that 640x360 is a holdover from my other camera, a 1280x720 one. So I can change 640x360 to, say, 640x480 to preserve the 4:3 aspect ratio. But I hear what you're saying: the sensor width would not be the same, or maybe I have to consider the pixels to be larger.

eric_engineer ( 2019-10-03 17:34:36 -0600 )

I guess if I think about it some more: if I lower the resolution but the FOV remains the same, then it's almost as if the focal point must virtually move forward.

eric_engineer ( 2019-10-03 20:07:20 -0600 )
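That intuition can be written down directly. Under a pure downscale, fx in pixels shrinks with the image while the FOV (and the physical focal length) stays fixed; under a crop, fx in pixels stays the same and the FOV narrows. A sketch with a hypothetical full-resolution camera matrix (the values are illustrative, not calibrated):

```python
import numpy as np

# Hypothetical intrinsics at full resolution (4224x3136), centred principal point.
K_full = np.array([[3295.0,    0.0, 2112.0],
                   [   0.0, 3295.0, 1568.0],
                   [   0.0,    0.0,    1.0]])

# Case 1: pure downscale to 640 wide. fx, fy, cx, cy all scale by the
# same factor; the field of view is unchanged.
s = 640.0 / 4224.0
K_small = K_full.copy()
K_small[:2, :] *= s   # fx drops to ~499 px

# Case 2: centre crop to 640x360 with no scaling. fx and fy are unchanged;
# only the principal point shifts, and the field of view narrows.
crop_x = (4224 - 640) / 2.0
crop_y = (3136 - 360) / 2.0
K_crop = K_full.copy()
K_crop[0, 2] -= crop_x
K_crop[1, 2] -= crop_y
```

If the ISP does some mix of crop-then-scale (as the 16:9-from-4:3 mismatch suggests), the effective fx lands between these two cases, which would explain a calibrated fx larger than the pure-downscale prediction.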