Calibration between a thermal and a visible camera

asked 2018-06-12 03:10:30 -0600 by Ael
updated 2018-06-12 03:24:30 -0600

I am trying to find the relationship between a thermal and a visible camera for subsequent data fusion. As a calibration target, I use a plastic board (a 4 mm Delrin plate) CNC-machined with a symmetric 7 x 5 circle grid. The plate is placed in front of a heated monitor as a backdrop, so the grid is visible to both cameras, as shown in the figures.

I am stuck at the intrinsic calibration step. For the thermal camera, calibration with OpenCV works. For the visible camera (an Intel RealSense SR300), the circles are correctly detected during feature identification and the calibration succeeds with the same OpenCV code, but when I undistort an image the result is highly deformed (example: https://i.stack.imgur.com/HzcFg.jpg). Any ideas? Could this be caused by the slight depth offset between the plate and the monitor, i.e. the target not being as planar as a printed cardboard pattern would be?
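For reference, here is a minimal sketch of my pipeline (Python/OpenCV; the file names, circle spacing, and use of the default blob detector are placeholders, not my exact setup):

    import glob
    import cv2
    import numpy as np

    pattern_size = (7, 5)   # symmetric circle grid, as on the Delrin plate
    spacing = 15.0          # placeholder centre-to-centre spacing in mm

    # Physical grid coordinates on the z = 0 plane, one copy per accepted view.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * spacing

    obj_points, img_points = [], []
    for fname in sorted(glob.glob("rgb_*.png")):   # placeholder file names
        img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, centers = cv2.findCirclesGrid(img, pattern_size,
                                             flags=cv2.CALIB_CB_SYMMETRIC_GRID)
        if found:
            obj_points.append(objp)
            img_points.append(centers)

    # Intrinsics from all accepted views; image size is (width, height),
    # taken here from the last loaded image.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, img.shape[::-1], None, None)
    print("RMS reprojection error:", rms)

    undistorted = cv2.undistort(img, K, dist)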


Comments

"it is highly deformed example as shown." ?

How many images are you using to calibrate each camera?

LBerger (2018-06-12 03:22:11 -0600)

I tried several sets of images: 5, 6, 10, 15, and 20.

Ael (2018-06-12 03:23:23 -0600)

10 images (for one camera) is enough, but the grid must be moved across the whole image area, and it's a good idea to vary the grid orientation too.

LBerger (2018-06-12 03:28:18 -0600)
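A quick way to check whether the views actually constrain the intrinsics is the per-view reprojection error (a sketch reusing K, dist, rvecs, tvecs and the point lists from the snippet above; treating roughly 1 px as the acceptable level is a rule of thumb, not a hard limit):

    # Mean reprojection error per view; a few bad views can skew the
    # distortion estimate even when the calibration call "succeeds".
    for i, (op, ip) in enumerate(zip(obj_points, img_points)):
        proj, _ = cv2.projectPoints(op, rvecs[i], tvecs[i], K, dist)
        err = cv2.norm(ip, proj, cv2.NORM_L2) / len(proj)
        print(f"view {i}: {err:.3f} px")

Dropping the worst views and recalibrating is often enough to fix a wildly deformed undistortion.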