
rmaw's profile - activity

2016-08-14 06:58:20 -0600 received badge  Editor (source)
2016-08-13 21:40:28 -0600 commented question calibrateCamera Expectations

Great, thanks. I have actually had better results with some 2K-resolution images and by increasing the board to 40x30 tiles, using a fresh image I made myself in Photoshop. Increasing the window size for cornerSubPix has also helped.

This yielded a translation of (13.99863005, 10.00042285, -49.98512976) where the expected translation was (14, 10, -50). This is much better.

Realistically, with a decent camera, how likely am I to achieve comparable accuracy with just calibrateCamera? Obviously with a real camera and an imperfect setup it is a lot harder to measure the absolute errors I am getting.

Many thanks.

2016-08-13 18:44:26 -0600 asked a question calibrateCamera Expectations

Hello,

I am just trying out OpenCV, specifically calibrateCamera, and I am attempting to get a baseline for my expectations when using it to calibrate real cameras. So I have generated images using 3D software, rendering posed checkerboards such as the one below. The checkerboard is at the origin, with its centre matching the point OpenCV treats as the board centre (which seems to be one row in from the top and left). Everything is scaled correctly in terms of the object points I am supplying to calibrateCamera.

(image: rendered checkerboard)

Now I know the camera has no distortion, so I am using the following flags:

fl = cv2.CALIB_FIX_K1 | cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3 | cv2.CALIB_FIX_K4 | cv2.CALIB_FIX_K5 | cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_PRINCIPAL_POINT

I am fixing the principal point to be at (image.w/2, image.h/2).

I know the virtual camera in my software is translated to (10.000, -4.000, 20.000). My software has y up and z towards the camera, so this translates to (10.000, 4.000, -20.000) in OpenCV's coordinates for calibrateCamera.
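Assuming those conventions, the conversion is just a flip of the y and z axes:

```python
import numpy as np

# The 3D package is y-up with +z toward the camera; OpenCV is y-down with
# +z away from the camera, so negating y and z maps between the two.
to_opencv = np.diag([1.0, -1.0, -1.0])
software_t = np.array([10.0, -4.0, 20.0])
opencv_t = to_opencv @ software_t  # -> [10., 4., -20.]
```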

I have played around a lot and found that large resolutions (upwards of 2,500 pixels wide) give a much better result for tvecs. I am calculating the translation of my camera from tvecs as follows:

mat = cv2.Rodrigues(rvecs[0])[0]         # rotation matrix R from the rotation vector
mat = mat.T                              # R^T (world-from-camera rotation)
camTranslation = np.dot(-mat, tvecs[0])  # camera centre C = -R^T * t

which gives this result:

Translation: (9.53675668, 3.86981952, -19.19919903)

This is close, but still relatively far from the (10.000, 4.000, -20.000) I was expecting.

calibrateCamera returns a reprojection error of 1.265... on a 4096-pixel-wide image. (I assume the reprojection error is in pixels, so this doesn't seem too bad.)

I am basically wondering why I am not getting something far closer to the actual translation of this setup. I was expecting to be perhaps 0.1 off, ideally less, but this is considerably more, for an image of far higher quality than anything I will have in practice with a real camera.

Side note:

I noticed that the OpenCV checkerboard image I found via Google (I think it's an official one) has imperfect corners on the checkerboard. Is this by design?

I'd be very interested to hear any advice on the points above from someone who has experience with camera calibration, and some guidance as to what I should realistically expect in terms of recovering accurate extrinsic matrices for the camera. Also, any advice on things to try to improve the results.

I assume that with many cameras a bundle adjustment step would improve on this, but should I be able to achieve a more accurate extrinsic matrix from this stage alone?

Thanks in advance!