Camera Matrix with Extra Optical Elements

Hello,

I am using a camera sensor with a fixed focal length lens that introduces noticeable distortion. Normally I would use the OpenCV chessboard routine as-is to correct for it, but in this case there are extra optical elements. The light path goes through a mirror and then a reflecting prism that splits the image, and each split image then goes through its own mirror. This effectively produces three regions: a left region of interest corresponding to one of the split images, a right region of interest corresponding to the other, and some dead space in between where I can't detect anything. My question is: how should the camera matrix be obtained?

  1. With the camera alone, without any optical elements aside from the lens: perform the chessboard calibration, obtain the camera matrix, and use this matrix after reattaching the camera to the rest of the system.
  2. Obtain a separate camera matrix for the left and right regions of interest, with the optical elements in place. In theory the left and right camera matrices should be the same, but there may be slight differences due to alignment of the optics and/or camera. But is a chessboard calibration even valid for obtaining a camera matrix if the center of the image does not correspond to the optical center of the camera?
  3. Some other method.

I don't think the reflection mirrors should be causing too much optical aberration.

Thanks in advance.


::::::::::::::::::EDIT:::::::::::::::::

So as a point of comparison, I ran a test using the first two approaches mentioned above; here are the resulting intrinsic matrices obtained from the data.


Calibration using only camera without optical elements:

    mtx = [[ 1.53451091e+03  0.00000000e+00  6.37547946e+02]
           [ 0.00000000e+00  1.53575661e+03  2.03955413e+02]
           [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]

    dist = [[ -2.90082167e-01 -1.25493796e+00 -6.68277712e-04  5.48729228e-03  3.24470291e+00]]


Left region calibration with optical elements:

    mtx = [[ 1.63242750e+03  0.00000000e+00  7.04154505e+02]
           [ 0.00000000e+00  1.62893516e+03  2.66533979e+02]
           [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]

    dist = [[ -5.39250573e-01  6.71474407e+00 -7.75939341e-03  3.58186934e-03 -5.96964357e+01]]


Right region calibration with optical elements:

    mtx = [[ 1.56749908e+03  0.00000000e+00  3.84091853e+02]
           [ 0.00000000e+00  1.55558685e+03  1.83966722e+02]
           [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]

    dist = [[ -5.01140894e-01  9.92103465e+00 -1.17531442e-02 -1.02322088e-02 -1.86869431e+02]]

So some key points:

  1. The measured focal lengths differ between the cases. This probably has to do with the alignment and extra path length of the optical elements.
  2. The rightmost column of the camera matrix (the principal point) differs in each case. That makes sense: calibrated on its own, the camera has the whole image to work with, whereas through the splitting optics each calibration only sees half an image. Though since the image is split down the middle, I would have expected the y-coordinate (2nd row, 3rd column) to be the same for the left and right camera matrices.
  3. The distortion coefficients differ. They are hard to compare across the cases, though. Could it simply be that the different regions see different distortion?
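To make the comparisons in points 1 and 2 concrete, here is a small numpy snippet over the three matrices above (values copied from the printed output) that reports the focal length deltas relative to the bare-camera calibration and the estimated principal points:

```python
import numpy as np

# Intrinsic matrices copied from the three calibrations above.
K_bare = np.array([[1534.51091, 0.0, 637.547946],
                   [0.0, 1535.75661, 203.955413],
                   [0.0, 0.0, 1.0]])
K_left = np.array([[1632.42750, 0.0, 704.154505],
                   [0.0, 1628.93516, 266.533979],
                   [0.0, 0.0, 1.0]])
K_right = np.array([[1567.49908, 0.0, 384.091853],
                    [0.0, 1555.58685, 183.966722],
                    [0.0, 0.0, 1.0]])

report = {}
for name, K in (("left", K_left), ("right", K_right)):
    report[name] = {
        # Focal length change vs. the bare-camera calibration, in percent.
        "dfx_pct": 100 * (K[0, 0] / K_bare[0, 0] - 1),
        "dfy_pct": 100 * (K[1, 1] / K_bare[1, 1] - 1),
        # Estimated principal point for this region.
        "cx": K[0, 2],
        "cy": K[1, 2],
    }

for name, r in report.items():
    print(f"{name}: fx {r['dfx_pct']:+.1f}%, fy {r['dfy_pct']:+.1f}%, "
          f"principal point ({r['cx']:.0f}, {r['cy']:.0f})")
```

This puts the left region's focal lengths roughly 6% above the bare-camera values versus roughly 2% for the right region, and shows the left/right cy values differing by over 80 pixels, which quantifies the asymmetry noted in point 2.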