Stereo Vision with a Fisheye and a Normal-FOV Camera
Hi,
I have a stereo setup of two cameras with different focal lengths and a baseline of roughly 10 cm. I performed an intrinsic and extrinsic calibration using a tool based on Bouguet's algorithm from http://www.vision.caltech.edu/bouguetj/calib_doc/
Here is the output I got:
Intrinsic parameters of left camera (HD):
Focal Length: fc_left = [ 941.65838 941.99357 ] ± [ 1.19499 1.55626 ]
Principal point: cc_left = [ 643.69533 348.60145 ] ± [ 1.28309 1.52477 ]
Skew: alpha_c_left = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees
Distortion: kc_left = [ 0.17235 -0.30426 0.00261 0.00016 0.00000 ] ± [ 0.00382 0.00795 0.00066 0.00056 0.00000 ]
Pixel error: err = [ 0.14001 0.15865 ]
Intrinsic parameters of right camera (Wide Angle):
Focal Length: fc_right = [ 275.62703 275.32800 ] ± [ 1.56095 1.91438 ]
Principal point: cc_right = [ 322.41626 234.36411 ] ± [ 2.46595 2.00803 ]
Skew: alpha_c_right = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees
Distortion: kc_right = [ -0.30762 0.11325 -0.00185 -0.00070 0.00000 ] ± [ 0.00756 0.00923 0.00092 0.00115 0.00000 ]
Extrinsic parameters (position of right camera wrt left camera):
Rotation vector: om = [ -0.04394 0.00216 -0.00449 ]
Translation vector:
T = [ -108.66272 -0.94249 -6.31156 ].
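In code, this output maps to the following matrices (simplified Python/OpenCV sketch; the variable names are just for this post, and I assume T is in millimetres, which matches the ~10 cm baseline):

import numpy as np
import cv2

# Left (HD) camera intrinsics from the calibration output above
K_left = np.array([[941.65838,   0.0,     643.69533],
                   [  0.0,     941.99357, 348.60145],
                   [  0.0,       0.0,       1.0    ]])
dist_left = np.array([0.17235, -0.30426, 0.00261, 0.00016, 0.0])   # k1 k2 p1 p2 k3

# Right (wide-angle) camera intrinsics
K_right = np.array([[275.62703,   0.0,     322.41626],
                    [  0.0,     275.32800, 234.36411],
                    [  0.0,       0.0,       1.0    ]])
dist_right = np.array([-0.30762, 0.11325, -0.00185, -0.00070, 0.0])

# Extrinsics: right camera w.r.t. left camera
om = np.array([-0.04394, 0.00216, -0.00449])    # Rodrigues rotation vector
R, _ = cv2.Rodrigues(om)                        # convert to 3x3 rotation matrix
T = np.array([-108.66272, -0.94249, -6.31156])  # translation (presumably mm, ~10 cm baseline)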
I then put the data into OpenCV's functions
-> stereoRectify( ..., CALIB_USE_INTRINSIC_GUESS, -1, ... )
-> initUndistortRectifyMap()
-> remap( ..., INTER_LINEAR )
to get the rectified versions of the left- and right-hand-side images. In case you wonder why the fisheye image is still so distorted at the outer parts: that is because I only calibrated the common viewing area of the two cameras. Here is what the output looks like:
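In slightly more detail, the rectification step is roughly the following (reusing the matrices from the sketch above; the file names and image sizes are placeholders, not necessarily my exact values, and note that stereoRectify() only takes a single image size even though my two sensors have different resolutions):

img_left  = cv2.imread("left.png")    # placeholder file names
img_right = cv2.imread("right.png")

size_hd = (1280, 720)   # assumed HD resolution; the wide-angle camera is smaller (640x480)

# Compute rectification transforms from the calibration data
R1, R2, P1, P2, Q, roi_left, roi_right = cv2.stereoRectify(
    K_left, dist_left, K_right, dist_right,
    size_hd, R, T,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS, alpha=-1)

# Build the undistortion/rectification maps (both at a common output size)
map_lx, map_ly = cv2.initUndistortRectifyMap(K_left,  dist_left,  R1, P1, size_hd, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_right, dist_right, R2, P2, size_hd, cv2.CV_32FC1)

# Warp both images into the rectified frame
rect_left  = cv2.remap(img_left,  map_lx, map_ly, cv2.INTER_LINEAR)
rect_right = cv2.remap(img_right, map_rx, map_ry, cv2.INTER_LINEAR)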
When I then feed the rectified images into the stereo block matcher (StereoBM(); the call is sketched at the end of this post), I do not manage to get a good depth map from them. I also fail to get good results with other images using this setup. Therefore my questions are:
- What can I expect from the rectification? Is that a good result or should the images be aligned with pixel accuracy over the whole scene?
- What can I do to get good results for my stereo application, or what are the reasons for the bad performance of my code? (Is it even possible to combine a fisheye camera and a "normal" field-of-view camera in a stereo setup with good results?)
- Due to the difference in focal length and image size, I can only use a small part of the fisheye image. Any ideas how I could make the best of it (e.g. interpolation with data from the right-hand-side image)?
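For completeness, the block-matching step is essentially this (the StereoBM parameters below are rough placeholders, not carefully tuned, and I create the matcher via the OpenCV 3+ factory function):

# StereoBM needs 8-bit grayscale, rectified input
gray_left  = cv2.cvtColor(rect_left,  cv2.COLOR_BGR2GRAY)
gray_right = cv2.cvtColor(rect_right, cv2.COLOR_BGR2GRAY)

bm = cv2.StereoBM_create(numDisparities=96, blockSize=15)   # placeholder parameters
disp = bm.compute(gray_left, gray_right).astype(np.float32) / 16.0   # StereoBM returns fixed-point (x16) disparities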
Cheers!