
How many images for camera calibration?

asked 2014-08-28 07:06:37 -0600 by kuros

Hi, I finally managed to get camera calibration working with relatively reliable results (using a set of 10 standard chessboard images). However, the choice of the orientation of the chessboard seems rather ad hoc. Does anyone in this forum know of a minimal set of standard orientations (e.g. n = (0, 0, 1), (0, -1, 1), (-1, 0, 1), ...) that would result in a stable camera calibration? In a controlled environment this ought to be possible. In the provided examples it seems like people hold the board in front of the camera and hope for the best.
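
For reference, here is roughly the pipeline I am using; a minimal sketch of the standard OpenCV chessboard workflow (the board size and file pattern are placeholders):

    import glob
    import cv2
    import numpy as np

    board_size = (9, 6)  # inner corners per row/column; adjust to your board

    # 3D corner coordinates in the board's own frame (z = 0), unit squares
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob("calib_*.png"):  # placeholder file pattern
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
            corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            obj_points.append(objp)
            img_points.append(corners)

    image_size = gray.shape[::-1]  # (width, height)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)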

Thank you


1 answer


answered 2014-08-28 11:31:17 -0600

I normally use around 20-30 images. I got the best results by moving the pattern to the edges of the image, since the distortion can be captured best far away from the optical center. In the end I use three distances to the camera (very close, mid-range, and far away), try to sample the edges of the image, and use several angles to the camera.


Comments

Of course, according to Zhang's paper it seems the more samples, the better the results. In fact, according to the data presented there (if I remember correctly), there are dramatic improvements with 5+ images. However, in many tests I got a much lower reprojection error (~0.2 or 0.3) when I used 5-10 images versus 20-30. In my view it ought to be possible for calibration in a controlled environment to use images taken from a minimal set of fixed orientations to get a reliable result, instead of the current ad hoc approach. That would require revisiting the mathematical derivation. As for near versus far camera positions relative to the chessboard plane: I got my best results (better undistortion) when all the pictures were taken with the camera close to the plane. Thanks.
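
For context, I compute the per-image reprojection error roughly like this (a sketch; obj_points, img_points, K, dist, rvecs and tvecs are the inputs/outputs of cv2.calibrateCamera from the pipeline above):

    import cv2
    import numpy as np

    def per_image_error(objp, imgp, rvec, tvec, K, dist):
        # RMS pixel distance between detected and reprojected corners
        projected, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diff = imgp.reshape(-1, 2) - projected.reshape(-1, 2)
        return np.sqrt((diff ** 2).sum(axis=1).mean())

    errors = [per_image_error(o, i, r, t, K, dist)
              for o, i, r, t in zip(obj_points, img_points, rvecs, tvecs)]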

kuros (2014-08-29 04:50:59 -0600)

But the reprojection error is maybe not the best indicator. If you just use one image, you can get a perfect reprojection error for that one image. It would be interesting to do the standard machine-learning approach and divide your images into training and test datasets. You could use some of the images to compute the intrinsic values of your camera. In a second step you try to find the pattern in the test images using these intrinsic values via solvePnP. A good calibration would have a small reprojection error in both cases. If you have too few images in the training phase, you could suffer from overfitting, which would result in much larger errors in the test phase.
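
Something like this, as a rough sketch (the 50/50 split is arbitrary, and obj_points, img_points and image_size are assumed to come from the usual corner-detection loop):

    import cv2
    import numpy as np

    # Hold out half of the views; the split ratio is arbitrary here
    n_train = len(obj_points) // 2
    train_obj, train_img = obj_points[:n_train], img_points[:n_train]
    test_obj, test_img = obj_points[n_train:], img_points[n_train:]

    # Intrinsics from the training views only
    _, K, dist, _, _ = cv2.calibrateCamera(
        train_obj, train_img, image_size, None, None)

    # Recover each held-out view's pose with the fixed intrinsics,
    # then measure how well the board corners reproject
    test_errors = []
    for objp, imgp in zip(test_obj, test_img):
        _, rvec, tvec = cv2.solvePnP(objp, imgp, K, dist)
        projected, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diff = imgp.reshape(-1, 2) - projected.reshape(-1, 2)
        test_errors.append(np.sqrt((diff ** 2).sum(axis=1).mean()))

    print("mean test reprojection error (px):", np.mean(test_errors))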

FooBar (2014-08-29 05:04:07 -0600)


Stats

Asked: 2014-08-28 07:06:37 -0600

Seen: 2,298 times

Last updated: Aug 28 '14