Calibration between thermal and visible camera

I am trying to find the relationship between a thermal and a visible camera for subsequent data fusion. I am using a plastic board (a 4 mm Delrin plate) machined on a CNC to carry a symmetric 7 x 5 circle grid. The calibration plate is placed in front of a heated monitor as a backdrop, so the board is visible to both cameras, as shown in the figures. I am stuck at the intrinsic calibration step. For the thermal camera, calibration with OpenCV works. For the visible camera (an Intel RealSense SR300), the circles are correctly detected during feature identification and the calibration succeeds with the same OpenCV code; however, when I undistort the image, the result is highly deformed, as shown here: https://i.stack.imgur.com/HzcFg.jpg. Any ideas? Could this be happening because of the slight depth difference between the plate and the monitor, i.e. the target not being as planar as a cardboard pattern would be?