Camera Matrix with Extra Optical Elements

asked 2016-04-26 09:25:38 -0600 by solarflare

updated 2016-04-28 13:46:58 -0600


I am using a camera sensor with a fixed focal length lens that exhibits distortion. Normally I would use the OpenCV chessboard routine as-is to correct for it, but in this case there are extra optical elements. The incoming light first passes through a mirror, then a reflection prism that splits the image, and each split image then passes through its own mirror. This results in three regions on the sensor: a left region of interest corresponding to one of the split images, a right region of interest corresponding to the other, and some dead space in between where nothing can be detected. My question is: how should the camera matrix be obtained?

  1. With the camera alone, without any optical elements other than the lens: perform the chessboard test, obtain the camera matrix, and reuse this matrix after attaching the camera back to the rest of the system.
  2. Obtain a separate camera matrix for the left and right regions of interest, calibrated through the optical elements. Theoretically the left and right camera matrices should be the same, but there may be slight differences due to alignment issues of the optics and/or camera. But is it valid to run a chessboard calibration when the center of the image does not correspond to the optical center of the camera?
  3. Some other method.

I don't think the reflection mirrors should be causing too much optical aberration.

Thanks in advance.


As a point of comparison, I ran a test using the first two approaches mentioned above; here are the resulting intrinsic matrices obtained from the data.

Calibration using only camera without optical elements:

mtx = [[ 1.53451091e+03  0.00000000e+00  6.37547946e+02]
       [ 0.00000000e+00  1.53575661e+03  2.03955413e+02]
       [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]

dist = [[ -2.90082167e-01  -1.25493796e+00  -6.68277712e-04  5.48729228e-03  3.24470291e+00]]

Left region calibration with optical elements:

mtx = [[ 1.63242750e+03  0.00000000e+00  7.04154505e+02]
       [ 0.00000000e+00  1.62893516e+03  2.66533979e+02]
       [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]

dist = [[ -5.39250573e-01  6.71474407e+00  -7.75939341e-03  3.58186934e-03  -5.96964357e+01]]

Right region calibration with optical elements:

mtx = [[ 1.56749908e+03  0.00000000e+00  3.84091853e+02]
       [ 0.00000000e+00  1.55558685e+03  1.83966722e+02]
       [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]

dist = [[ -5.01140894e-01  9.92103465e+00  -1.17531442e-02  -1.02322088e-02  -1.86869431e+02]]

So some key points:

  1. There are some differences in the focal lengths measured in each case. This probably has to do with the alignment of the optical elements.
  2. The rightmost column of the camera matrix (the principal point) is different in each case. This makes sense: when the camera is calibrated on its own, it has the whole image to work with, but when it looks through the splitting optics, each calibration only sees half an image. Though since the image ...
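The expected relationship between the full-sensor and per-region matrices can be illustrated with a small sketch: cropping an image only translates the principal point (cx, cy) by the crop origin, while the focal lengths are unchanged. The crop offset below is a hypothetical value, not one measured from this setup.

```python
import numpy as np

def shift_principal_point(K, x_offset, y_offset):
    """Adjust a camera matrix for an image cropped at (x_offset, y_offset).

    Cropping only translates the principal point; fx and fy are
    unchanged, which is why the focal lengths should agree across
    calibrations while cx/cy differ.
    """
    K2 = K.copy()
    K2[0, 2] -= x_offset
    K2[1, 2] -= y_offset
    return K2

# Full-sensor matrix from the bare-camera calibration above.
K_full = np.array([[1.53451091e+03, 0.0, 6.37547946e+02],
                   [0.0, 1.53575661e+03, 2.03955413e+02],
                   [0.0, 0.0, 1.0]])

# Hypothetical crop origin for one of the split regions.
K_region = shift_principal_point(K_full, 320, 0)
```

If the optics were perfectly aligned, the per-region matrices obtained by calibration would match this shifted matrix up to noise; deviations beyond that point to misalignment or aberration introduced by the mirrors and prism.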


I prefer method 2. You can split your image into right and left halves, and then you will have two virtual cameras to calibrate...

LBerger ( 2016-04-26 09:34:13 -0600 )

Because I am using a micro video lens, I had to scale down the chessboard. For comparison, a square on the original printed chessboard measured about 21 mm, while the square size of the chessboard I have is about 1.03 mm. Unfortunately, I do not see a way to account for this in Python, which I am using. In contrast, it looks like the C++ version has a variable called "squareSize".

Also, from your own experience, what kind of discrepancy did you see in the two camera matrices?

solarflare ( 2016-04-27 16:14:55 -0600 )