2015-01-08 04:06:42 -0600 | asked a question:

Given a pair of stereo-calibrated cameras and a set of 2D point correspondences, what would be a proper way to obtain the 3D coordinates of those points through triangulation?

I have two similar cameras pointing in roughly the same direction with a baseline of about 10 cm. I calibrated each camera separately using calibrateCamera, and then used the resulting data to calibrate them as a stereo pair with stereoCalibrate.

In my application I load the saved calibration data, compute the rectification transforms with stereoRectify, and build the undistortion maps with initUndistortRectifyMap (once per camera). I then rectify images from the same pair of cameras with remap.

Given the remapped (rectified/undistorted) images, I find 2D point-to-point correspondences on them (currently I just use findChessboardCorners), refine the matches with correctMatches, and obtain the positions of the points in homogeneous 3D space with triangulatePoints. Lastly, I divide each vector by its fourth coordinate and save the result to a file.

After running this program on a set of coplanar points, the points in the saved file all lie roughly on the same plane, but with slight deformation. (Example data set omitted here; it can be plotted with gnuplot: splot 'foo.dat' u 1:2:3 w p.)

So I'm wondering whether the imperfections are caused simply by numerical errors in each step (and perhaps errors in locating the points on the image, quantization, etc.), or whether there is an error in my method.
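For reference, the triangulation-and-divide step described above can be sketched without OpenCV. Below is a minimal numpy implementation of linear (DLT) triangulation, the same algorithm triangulatePoints uses, followed by the divide-by-fourth-coordinate step. The intrinsics, the 10 cm baseline, and the test point are made-up example values, not taken from the question; with noise-free correspondences the true 3D point is recovered exactly, which suggests that plane deformation on real data comes from calibration, detection, and quantization noise rather than from the method itself.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 projection matrices; x1, x2: 2D pixel coordinates."""
    # Each image point contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - (P[0] @ X) = 0, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest
    # singular value of A (least-squares null vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # homogeneous 4-vector
    return X[:3] / X[3]   # divide by the 4th coordinate

# Example setup: two rectified cameras sharing made-up intrinsics K,
# second camera translated 0.10 m along x (the ~10 cm baseline).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.10], [0.0], [0.0]])])

# A hypothetical 3D point 1.5 m in front of the rig.
X_true = np.array([0.05, -0.02, 1.5])

# Project it into both cameras to get ideal (noise-free) correspondences.
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_rec = triangulate_dlt(P1, P2, x1, x2)
print(np.allclose(X_rec, X_true, atol=1e-6))  # → True
```

Adding even half a pixel of noise to x1/x2 before triangulating will visibly perturb the recovered point, which is consistent with the slight out-of-plane deformation observed in the saved data.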