Understanding Calibration

As far as I understand, camera lens calibration is done so that distortions in the image sensor data can be rectified before the image is stored as a file. I can think of intensity as one such distortion parameter that could be rectified, because the intensity of the captured pixels might not match reality for that particular lens. What are the other factors that are calibrated? I see everywhere that a chessboard is used for calibration, but I could never understand how a chessboard can help calibrate the camera.

Moreover, the rectification should be done before the image is stored as a file. Even if we know the distortion coefficients etc. beforehand, what use are they, given that the image has already been stored as a file by the software running in the camera? Or is the calibration done so that the rectification can be carried out later by the user? This is all about a single-camera setup.
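
To make the question concrete, here is a minimal sketch of the single-camera workflow as I understand it from the OpenCV tutorials (the 9x6 pattern size and the file paths are just assumptions for illustration):

```python
import glob
import numpy as np
import cv2

pattern = (9, 6)  # inner chessboard corners per row/column (assumed)

# 3D coordinates of the corners in the board's own plane (Z = 0);
# the square size only scales the result, so 1 unit per square is fine.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):  # hypothetical path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the camera matrix (focal lengths, principal point) and
# the lens distortion coefficients from the 3D-2D correspondences.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# The coefficients can be applied to any stored image afterwards:
img = cv2.imread("some_photo.png")  # hypothetical file
cv2.imwrite("some_photo_undistorted.png", cv2.undistort(img, K, dist))
```

If I read this correctly, the chessboard works because its corner positions are known exactly in 3D, so the solver can compare where they should project and where they actually land in the image, but I would appreciate confirmation.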

Are there any more parameters that need to be considered when we have multiple cameras in the setup? I can think of a setup where multiple cameras are used and the images are overlapped with each other, like in sensor fusion. Of course, the individual cameras need to be calibrated first, but what about the relationship between the cameras? What kinds of parameters are important here? I have read something about translation, rotation and projection matrices. How are these calculated and used in the rectification of the images after they have been captured by the cameras?
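
For the two-camera case, this is the part I am trying to follow, sketched with the OpenCV stereo functions (all input names are hypothetical placeholders for the outputs of a per-camera calibration like the one above):

```python
import cv2

def rectify_stereo_pair(obj_points, img_pts_left, img_pts_right,
                        K1, d1, K2, d2, image_size, left_img, right_img):
    """Estimate the pose between two cameras and rectify one image pair.

    All inputs are assumed to come from a per-camera chessboard
    calibration like the single-camera sketch above.
    """
    # R, T: rotation and translation of the right camera relative to
    # the left; E, F: essential and fundamental matrices.
    _, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_pts_left, img_pts_right,
        K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)

    # R1/R2 rotate each camera into a common rectified frame;
    # P1/P2 are the projection matrices in that frame.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K1, d1, K2, d2, image_size, R, T)

    # Turn the matrices into per-pixel lookup maps and warp each image
    # so that corresponding points end up on the same scanline.
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1,
                                           image_size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2,
                                           image_size, cv2.CV_32FC1)
    return (cv2.remap(left_img, m1x, m1y, cv2.INTER_LINEAR),
            cv2.remap(right_img, m2x, m2y, cv2.INTER_LINEAR))
```

As far as I can tell, the maps depend only on the calibration, so they could be computed once and applied to every stored frame afterwards, but I would like to confirm whether that is the intended workflow.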