2020-08-14 02:22:08 -0600 | received badge | ● Notable Question (source) |
2017-04-19 08:48:37 -0600 | received badge | ● Popular Question (source) |
2014-07-10 11:28:30 -0600 | received badge | ● Nice Question (source) |
2013-10-04 15:17:14 -0600 | received badge | ● Student (source) |
2013-10-03 20:44:18 -0600 | received badge | ● Editor (source) |
2013-10-03 20:41:48 -0600 | asked a question | Stereo Matching/Calibration Help Hello, I am using the Bumblebee XB3 stereo camera, which has 3 lenses. I've spent about three weeks reading forums, tutorials, the Learning OpenCV book, and the actual OpenCV documentation on the stereo calibration and stereo matching functionality. In summary, my issue is that I can generate a good disparity map but very poor point clouds that seem skewed/squished and are not representative of the actual scene. What I have done so far: Used the OpenCV stereo_calibration and stereo_matching examples to: 1) Calibrate my stereo camera using chessboard images What I have done so far to rule out possible causes:
What I suspect the problem is: My disparity image looks reasonably acceptable, but the next step is to go to a 3D point cloud using the Q matrix. I suspect I am not calibrating the cameras correctly to generate the right Q matrix. Unfortunately, I've hit a wall in terms of what else I can do to get a better Q matrix. Can someone please suggest a way ahead? The other thing that I think might be problematic is the assumptions I make when using the cv::stereoCalibrate function. For the moment, I calibrate each camera individually to obtain the camera and distortion matrices (cameraMatrix[0], distCoeffs[0] and cameraMatrix[1], distCoeffs[1]), which reduces the complexity of the stereoCalibrate call. Additionally, I think it might be useful to mention how I am going from disparity to point cloud: I use OpenCV's cv::reprojectImageTo3D and then write the data to a PCL point cloud structure. Here is the relevant code: (more) |
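(The actual snippet is collapsed behind the (more) link above. The following is only a minimal sketch of the disparity-to-point-cloud step described, not the poster's code: it assumes the disparity map is the CV_16S output of StereoBM/StereoSGBM, that Q comes from cv::stereoRectify, and the helper name disparityToCloud is made up for illustration.)

    #include <cmath>
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Hypothetical helper: convert a StereoBM/StereoSGBM disparity map and the
    // Q matrix from cv::stereoRectify into a PCL point cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr disparityToCloud(const cv::Mat& disparity,
                                                         const cv::Mat& Q)
    {
        // StereoBM/StereoSGBM return fixed-point disparities scaled by 16;
        // reprojectImageTo3D expects true (floating-point) disparity values.
        cv::Mat dispFloat;
        disparity.convertTo(dispFloat, CV_32F, 1.0 / 16.0);

        // Back-project every pixel to 3D using Q
        // (with handleMissingValues=true, missing disparities get Z = 10000).
        cv::Mat xyz;
        cv::reprojectImageTo3D(dispFloat, xyz, Q, /*handleMissingValues=*/true);

        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        for (int y = 0; y < xyz.rows; ++y) {
            for (int x = 0; x < xyz.cols; ++x) {
                const cv::Vec3f& p = xyz.at<cv::Vec3f>(y, x);
                // Skip invalid/missing points before they distort the cloud.
                if (!std::isfinite(p[2]) || std::fabs(p[2]) >= 10000.0f)
                    continue;
                cloud->push_back(pcl::PointXYZ(p[0], p[1], p[2]));
            }
        }
        cloud->width  = static_cast<uint32_t>(cloud->size());
        cloud->height = 1;
        cloud->is_dense = false;
        return cloud;
    }

A unit sanity check of the reconstructed depths (e.g. against a measured chessboard distance) would show whether the problem lies in this step or in the Q matrix produced by calibration.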