
Understanding Calibration

asked 2017-07-20 03:47:58 -0600

infoclogged

As far as I have understood, camera lens calibration is done so that distortions in the image sensor data can be rectified before the image is stored as a file. I can think of intensity as one sensor-data distortion parameter that could be rectified, because the intensity of the captured pixels might not match reality for that particular lens. What are the other factors that are calibrated? I see everywhere that a chessboard is used for calibration, but I could never understand how a chessboard can help calibrate the camera. Moreover, the rectification should be done before the image is stored as a file. Even if we know the distortion coefficients etc. beforehand, what use are they, given that the image has already been stored as a file by the software running in the camera? Or is the calibration done so that the rectification can be performed later by the user? This is all about a single-camera setup.

Are there any more parameters that need to be considered when we have multiple cameras in the setup? I can think of a setup with multiple cameras where the images overlap with each other, like in sensor fusion. Of course, the individual cameras need to be calibrated first, but what about the relationship between the cameras? Which parameters are important here? I have read something about translation, rotation, and projection matrices. How are these calculated and used to rectify the image after it has been captured by the cameras?


Comments

1

The distortion here is optical distortion. Intensity issues (e.g. chromatic aberration) are not treated by the OpenCV camera calibration process. For intensity treatment, see the raw image format; that is the job of image-editing software like DxO OpticsPro or Photoshop.

OpenCV is a computer vision library intended for uses mainly in robotics and related domains.

Eduardo ( 2017-07-20 06:25:01 -0600 )

1 answer


answered 2017-07-20 07:24:41 -0600

The calibration of a camera lens is done so that the image sensor data distortion can be rectified before being stored as a file

Nope. Undistorting an image has several applications. Distortion is introduced by lenses, particularly wide-angle ones used to achieve bigger fields of view. But when you are performing image processing, you normally want no distortion in the image. For example, a face-detection algorithm will work better on an undistorted image.
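To make "distortion" concrete, here is a minimal numpy sketch of the radial part of the Brown-Conrady model that OpenCV's distortion coefficients describe; the coefficient values `k1`, `k2` below are made up for illustration, and real lenses usually also need tangential terms.

```python
import numpy as np

def distort(points, k1, k2):
    """Apply a simple radial (Brown-Conrady) distortion model to
    normalized image coordinates. Real calibrations usually also
    include tangential terms (p1, p2) and higher radial orders."""
    x, y = points[:, 0], points[:, 1]
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return np.column_stack((x * factor, y * factor))

# A point at the optical center is unaffected; points farther from
# the center are displaced, which is why straight lines appear curved.
pts = np.array([[0.0, 0.0], [0.5, 0.5]])
print(distort(pts, k1=-0.2, k2=0.05))
```

Undistorting an image means inverting this mapping for every pixel, using the coefficients recovered by calibration.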

I see everywhere that a chessboard is used for calibration, but I could never understand how a chessboard can help calibrate the camera.

To put it simply, chessboards are used to infer the distortion coefficients and other camera intrinsic parameters. Since the sizes of the chessboard squares are known, and their relative positions are also known, algorithms can calculate the transformation the lens applied to bring the chessboard from its ideal appearance to the distorted one in the image. Undistorting is just reverting that transformation.
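The forward model that calibration inverts can be sketched in a few lines: the 3D positions of the chessboard corners are known exactly, so projecting them with a pinhole camera predicts where they should land in the image. The intrinsic matrix `K` and the board pose below are hypothetical values chosen for illustration; calibration (e.g. OpenCV's `cv2.calibrateCamera`, fed with corner correspondences from many board views) solves for them from the observed corner positions.

```python
import numpy as np

# Known 3D chessboard corners: a 3x3 grid of inner corners with 30 mm
# squares, expressed in the board's own frame (Z = 0 on the board plane).
square = 30.0
obj_pts = np.array([[i * square, j * square, 0.0]
                    for j in range(3) for i in range(3)])

# Hypothetical intrinsic matrix: focal lengths fx, fy and principal
# point cx, cy in pixels. Calibration estimates these values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, pts3d, t):
    """Pinhole projection of board points translated by t
    (no rotation and no lens distortion, for simplicity)."""
    cam = pts3d + t                 # board frame -> camera frame
    uv = (K @ cam.T).T              # apply the intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

# Board held 500 mm in front of the camera, centered on the axis.
img_pts = project(K, obj_pts, t=np.array([0.0, 0.0, 500.0]))
```

Calibration works in the opposite direction: given many observed `img_pts` from different board poses, it finds the `K`, distortion coefficients, and poses that best explain them.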

Or is it that the calibration is done so that the rectification can be done later by the user himself?

Yes. Since one normally doesn't get these parameters from the camera vendor, you need to calibrate the camera yourself if you want to work with undistorted images.

Are there any more parameters that need to be considered when we have multiple cameras in the setup?

That depends on the application. For example, if you detect an object in camera A and then want to look at the same place in camera B, you need to know the transformation between A and B (the rotation matrix and translation vector).
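Applying that transformation is a single line of linear algebra. The `R` and `t` below are made-up extrinsics standing in for what a stereo calibration (e.g. OpenCV's `cv2.stereoCalibrate`) would estimate:

```python
import numpy as np

# Hypothetical extrinsics: camera B sits 100 mm to the right of
# camera A with no relative rotation.
R = np.eye(3)                      # rotation from A's frame to B's frame
t = np.array([-100.0, 0.0, 0.0])   # translation of A's origin in B's frame

def a_to_b(p_a, R, t):
    """Express a 3D point given in camera A's frame in camera B's frame."""
    return R @ p_a + t

p_a = np.array([50.0, 0.0, 1000.0])   # a point 1 m in front of camera A
print(a_to_b(p_a, R, t))              # -> [-50.    0. 1000.]
```

Once the point is in camera B's frame, projecting it with B's intrinsics tells you which pixel of B to look at.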

If you want to stitch two images together in their overlapping area to create a panorama, you need to find common points in both images, use them to calculate a homography, and use that homography to transform one image into the plane of the other.
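A homography is just a 3x3 matrix applied in homogeneous coordinates. In a real pipeline you would estimate it from matched feature points (e.g. with `cv2.findHomography`); the matrix below is a made-up one that simply translates by (100, 20) pixels, to show the mechanics:

```python
import numpy as np

# Hypothetical homography: a pure translation of (100, 20) pixels.
# A real estimated homography generally also encodes rotation,
# scale, and perspective between the two image planes.
H = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0,  20.0],
              [0.0, 0.0,   1.0]])

def warp_point(H, pt):
    """Map an image point through a homography via homogeneous coords."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

print(warp_point(H, (10.0, 10.0)))   # -> [110.  30.]
```

Warping a whole image (as `cv2.warpPerspective` does) amounts to applying this mapping to every pixel coordinate.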


Comments

Regarding the chessboard: you mean there is one perfect chessboard image, and then there is an image taken by a camera (lens) that has some distortions, rounded corners for example. The distortion coefficients can be calculated by a calibration algorithm comparing the two images. Is my understanding correct?

infoclogged ( 2017-07-20 08:14:58 -0600 )

There is no image of the "perfect" chessboard, just assumptions about how a "perfect" chessboard should look in relation to how the chessboard appears in the image.

Look at this image: as you can see, the chessboard lines are not straight; it is clear that there is distortion. Given the real dimensions of the squares in the chessboard, calibration calculates what transformation should be applied to the image to revert it to being undistorted.

Pedro Batista ( 2017-07-20 08:43:43 -0600 )
