calibrateCamera Expectations

asked 2016-08-13 18:24:21 -0500 by rmaw

updated 2016-08-14 06:58:20 -0500


I am just trying out OpenCV, specifically calibrateCamera, and I am attempting to establish a baseline for my expectations before using this to calibrate real cameras. So I have generated images using 3D software, rendering posed checkerboards such as the one below. The checkerboard is at the origin, with its centre matching OpenCV's convention (OpenCV appears to take the board centre to be one row in from the top and one column in from the left). Everything is scaled correctly in terms of the object points I am supplying to calibrateCamera.

[image: rendered checkerboard]

Now, I know the camera has no distortion, so I am passing flags that fix the distortion coefficients to zero.

I am also fixing the principal point at (image.w/2, image.h/2).

I know the virtual camera in my software is translated to (10.000, -4.000, 20.000). My software has y up and z towards the camera, so this translates to (10.000, 4.000, -20.000) in the OpenCV coordinate convention used by calibrateCamera.
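That axis conversion (flip y and z) can be written out explicitly — a small sketch, assuming the only difference between the two conventions is the direction of the y and z axes:

```python
import numpy as np

# Renderer convention: y up, z towards the camera.
# OpenCV convention:   y down, z away from the camera (into the scene).
to_opencv = np.diag([1.0, -1.0, -1.0])

cam_pos_renderer = np.array([10.0, -4.0, 20.0])
cam_pos_opencv = to_opencv @ cam_pos_renderer  # -> [10., 4., -20.]
```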

I have experimented a lot and found that large resolutions (upwards of 2,500 pixels wide) give a much better result for tvecs. I am calculating the translation of my camera from tvecs as follows:

import cv2
import numpy as np

# rvecs, tvecs come from cv2.calibrateCamera
mat = cv2.Rodrigues(rvecs[0])[0]               # rotation vector -> 3x3 matrix
camTranslation =, tvecs[0])  # camera centre in world coords: C = -R^T t

which is giving this result,

Translation: 9.53675668, 3.86981952, -19.19919903

This is close, but still relatively far off the (10.000, 4.000, -20.000) I was expecting.

calibrateCamera returns a reprojection error of 1.265... on a 4096-pixel-wide image. (I assume the reprojection error is in pixels, so this doesn't seem too bad.)

I am basically wondering why I am not getting something far closer to the actual translation of this setup. I was expecting to be perhaps 0.1 off, ideally less, but this is considerably more, for an image of far higher quality than anything I will have in practice with a real camera.

Side note:

I noticed that the checkerboard image I found via Google (I think it's an official OpenCV one) has imperfect corners. Is this by design?

I'd be very interested to hear any advice on the points above from someone who has experience with camera calibration, along with guidance on what I should realistically expect in terms of recovering accurate camera extrinsic matrices, and any suggestions for improving the results.

I assume that with many cameras a bundle adjustment step would improve on this, but should I be able to achieve a more accurate extrinsic matrix at this stage?

Thanks in advance!



There should not be an overlap between the squares. It should be as perfect a corner as possible. That may be the problem with your image. I can't tell if that's a picture of your chess board or not.

Tetragramm ( 2016-08-13 20:53:08 -0500 )

Also, that image is anti-aliased in a way real images rarely are. Here's a picture from a real camera. You can see that the corners look very different from your image. Maybe try slightly blurring it? A 3x3 gaussian maybe?

Tetragramm ( 2016-08-13 20:57:28 -0500 )

Great, thanks. I have actually had better results with some 2K-resolution images and by increasing to 40x30 tiles on the board, using a fresh image I made myself in Photoshop. Additionally, increasing the window size for cornerSubPix has helped.

This has yielded a translation of 13.99863005, 10.00042285, -49.98512976, where the expected translation was 14, 10, -50. This is much better.

Realistically, with a decent camera, how likely am I to achieve a comparable result in terms of accuracy with just calibrateCamera? Obviously with a real camera and an imperfect setup it is a lot harder to measure the absolute errors I am getting.

many thanks

rmaw ( 2016-08-13 21:40:28 -0500 )

I've only calibrated a few cameras, but I'd say you can expect about 1-2 pixel average reprojection error. Mind, that's with distortion and over a set of 100 images, so one bad detection could throw that way off. Also, I'm using a charuco board printed on paper, which isn't necessarily flat as assumed.

Basically, you can expect errors of up to whatever 2 pixels corresponds to at the range of the board. So that depends on the FOV of the camera, the range, and the resolution of the image. That should be nearly the upper bound on your error, though. Errors should on average cancel out, so one point may be 5 pixels off to the left while another is off to the right, and the final error is less.
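To put rough numbers on that, here is a back-of-envelope conversion from pixels to scene units — the FOV, range, and resolution below are made-up examples, not values from this thread:

```python
import math

width_px = 4096     # image width in pixels
hfov_deg = 60.0     # horizontal field of view (assumed)
range_units = 20.0  # camera-to-board distance in scene units (assumed)

# Width of the scene covered at that range, divided by the pixel count,
# gives the footprint of one pixel in scene units.
units_per_px = 2 * range_units * math.tan(math.radians(hfov_deg) / 2) / width_px
two_px_error = 2 * units_per_px  # a 2-pixel reprojection error in scene units
```

With these example numbers, one pixel covers roughly 0.006 scene units at the board, so a 2-pixel error is on the order of 0.01 units.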

Also, if you're not estimating the camera matrix at the same time fewer ...(more)

Tetragramm ( 2016-08-13 23:04:51 -0500 )