Hello,
I am working on a 3-camera setup, where Camera1 is calibrated to Camera2 and Camera3 is calibrated to Camera2, i.e. the extrinsic pairs are [Cam1, Cam2] and [Cam3, Cam2]. Each camera was also calibrated intrinsically beforehand, with feature points extracted from a calibration board.
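For reference, the calibration is done roughly like this (a simplified sketch of the OpenCV calls; variable names are placeholders and the board detection / image loading code is omitted):

```python
import cv2

# obj_pts: list of Nx3 board points per image, img_pts1 / img_pts2: detected
# corners per image for Cam1 / Cam2, img_size: (width, height). Placeholder names.

# Intrinsics per camera
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts1, img_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts2, img_size, None, None)

# Extrinsics [Cam1, Cam2], keeping the already estimated intrinsics fixed
_, _, _, _, _, R12, T12, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, img_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# The same is done for [Cam3, Cam2] with K3/d3 and the corresponding image points.
```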
Cam1 and Cam2 are identical RGB cameras (Logitech C920), while Cam3 has different lens distortion, resolution, etc. (Gobi-640-GigE).
I want to take pictures at a distance of 0.3 - 0.4 m. I can move the cameras within an area of 0.4 m x 0.4 m in my setup. All cameras point down from above and are placed inside a box. The position (X, Z), angle and tilt of the cameras can be changed; the height is determined by the ROI.
The idea is to take a picture with Cam1 and Cam2, use triangulation to estimate the point's position in 3D space, and then project that point into the view of Cam3.
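This is roughly how I implement that step (a sketch with placeholder names; I assume R12/T12 map Cam1 coordinates into Cam2 coordinates and R32/T32 map Cam3 coordinates into Cam2 coordinates, i.e. the stereoCalibrate outputs in the order given above; if the direction is the other way round, the inversions have to be swapped):

```python
import cv2
import numpy as np

# K1, d1 / K2, d2 / K3, d3 - intrinsics and distortion coefficients per camera
# R12, T12 - extrinsics [Cam1, Cam2]: Cam1 coordinates -> Cam2 coordinates
# R32, T32 - extrinsics [Cam3, Cam2]: Cam3 coordinates -> Cam2 coordinates

def map_points_to_cam3(pts1, pts2, K1, d1, K2, d2, K3, d3, R12, T12, R32, T32):
    # Triangulate in Cam1's frame: P1 = K1 [I|0], P2 = K2 [R12|T12]
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R12, T12.reshape(3, 1)])

    # Undistort to the pixel coordinates of the ideal (distortion-free) cameras,
    # so the linear projection matrices above are valid
    u1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, d1, P=K1).reshape(-1, 2)
    u2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, d2, P=K2).reshape(-1, 2)

    # Homogeneous 3D points in Cam1 coordinates
    Xh = cv2.triangulatePoints(P1, P2, u1.T, u2.T)
    X1 = (Xh[:3] / Xh[3]).T

    # Cam1 frame -> Cam2 frame
    X2 = (R12 @ X1.T).T + T12.reshape(1, 3)

    # Cam2 frame -> Cam3 frame (inverse of [Cam3, Cam2])
    R23 = R32.T
    T23 = -R32.T @ T32.reshape(3, 1)
    rvec, _ = cv2.Rodrigues(R23)

    # Project into Cam3 with its own intrinsics and distortion
    pts3, _ = cv2.projectPoints(X2, rvec, T23, K3, d3)
    return pts3.reshape(-1, 2)
```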
I can calibrate them and get more or less good values, but I would like to improve them:
0.4 < ExtrinsicError([Cam1, Cam2]) < 0.6
1.2 < ExtrinsicError([Cam3, Cam2]) < 2.0
The mapping of the points works pretty well in the center of the image, but the quality gets very bad towards the edge of the image (at around 50% of the distance from the center, the error is more than 5 pixels). I once achieved a calibration, by trying out all calibration parameters, where the error was around 3 pixels (at 50% away from the center), but that was a lucky shot. The extrinsic error does not correspond to the accuracy at the lower error bound (e.g. similar extrinsic error, but the reprojection varies by 5 or 10 pixels from the original, evaluated by manual point comparison).
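(For the comparison I simply compute the per-point pixel distance between the projected points and manually picked reference points in the Cam3 image, along these lines; names are placeholders:)

```python
import numpy as np

# projected: Nx2 points from the pipeline above, clicked: Nx2 manually
# picked reference points in the Cam3 image
def pixel_error(projected, clicked):
    err = np.linalg.norm(projected - clicked, axis=1)  # per-point error in px
    return err.mean(), err.max()
```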
At the moment all 3 cameras are mounted on the same plane in a line, with Cam3 centered between Cam1 and Cam2. Cam1 and Cam2 are 15 cm apart and are focused/aligned on the center of my ROI.
Are there any heuristics for how the cameras should be aligned or positioned? I have already looked for papers and googled my problem, but could not find a satisfying answer. The OpenCV documentation also does not say anything about the hardware configuration. What angle should the cameras have to achieve the best results for the reprojection into Cam3? Is a parallel positioning better than a tilted setup?