Given a pair of stereo-calibrated cameras and a set of 2D point correspondences, what would be a proper way to obtain 3D coordinates of those points through triangulation?
http://answers.opencv.org/question/52579/given-a-pair-of-stereo-calibrated-cameras-and-a-set-of-2d-point-correspondences-what-would-be-a-proper-way-to-obtain-3d-coordinates-of-those-points/

I have a set of two similar cameras pointing in roughly the same direction with a baseline of about 10 cm. I have calibrated each camera separately using the calibrateCamera function, and then used the obtained data to calibrate them as a stereo pair using stereoCalibrate.
Now, in my application I load the obtained calibration data, then compute the needed transforms with stereoRectify followed by initUndistortRectifyMap (once per camera):
Mat ML, MR, DL, DR, R, T, F;
Mat RL, RR, PL, PR, Q;
Mat mapL1, mapL2, mapR1, mapR2;
Size img_size;

FileStorage fs(CALIB_FILE, FileStorage::READ);
fs["ML"] >> ML;  // left/right camera matrices
fs["MR"] >> MR;
fs["DL"] >> DL;  // left/right distortion coefficients
fs["DR"] >> DR;
fs["R"] >> R;    // rotation, translation and fundamental matrix from stereoCalibrate
fs["T"] >> T;
fs["F"] >> F;
fs["img_size"] >> img_size;
fs.release();

stereoRectify(ML, DL, MR, DR, img_size, R, T,
              RL, RR, PL, PR, Q, CALIB_ZERO_DISPARITY, 0);
initUndistortRectifyMap(ML, DL, RL, PL, img_size, CV_16SC2, mapL1, mapL2);
initUndistortRectifyMap(MR, DR, RR, PR, img_size, CV_16SC2, mapR1, mapR2);
Then I transform the images from the same pair of cameras with remap:
Mat left, right, left_rect, right_rect;
cap_l >> left;
cap_r >> right;
// remap does not support in-place operation, so use separate output Mats
remap(left, left_rect, mapL1, mapL2, INTER_LINEAR);
remap(right, right_rect, mapR1, mapR2, INTER_LINEAR);
Given the remapped (rectified/undistorted) images, I find a set of 2D point correspondences on them (currently I just use findChessboardCorners), then use correctMatches and triangulatePoints to obtain the positions of the points in homogeneous 3D space. Lastly, I divide each vector by its fourth coordinate and save the results to a file.
// pointbuf_l / pointbuf_r are the vector<Point2f> correspondences
// found on the rectified images (here from findChessboardCorners)
Mat l, r;
correctMatches(F, pointbuf_l, pointbuf_r, l, r);

// triangulatePoints (re)allocates the output as a 4xN CV_32F matrix,
// so there is no need to pre-size or pre-type it
Mat pnts4D;
triangulatePoints(PL, PR, l, r, pnts4D);

FILE* f = fopen("foo.dat", "w");
for (int i = 0; i < pnts4D.cols; ++i) {
    Mat col = pnts4D.col(i);
    float w = col.at<float>(3, 0);
    float x = col.at<float>(0, 0) / w;
    float y = col.at<float>(1, 0) / w;
    float z = col.at<float>(2, 0) / w;
    fprintf(f, "%f %f %f\n", x, y, z);
}
fclose(f);
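For reference, the divide-by-w step in the loop above can be isolated into a small helper. This is a standalone sketch, independent of OpenCV; the name dehomogenize is mine:

```cpp
#include <array>
#include <cassert>

// Convert a homogeneous 4-vector (x, y, z, w) to Euclidean (x/w, y/w, z/w).
// Points at or near infinity (w close to 0) have no finite Euclidean image,
// so a caller should check w before dividing.
std::array<float, 3> dehomogenize(const std::array<float, 4>& h) {
    assert(h[3] != 0.0f && "point at infinity has no Euclidean coordinates");
    return { h[0] / h[3], h[1] / h[3], h[2] / h[3] };
}
```

A w of exactly zero would correspond to a point at infinity, which triangulation can produce for (near-)parallel rays, so the check is worth keeping.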
After running this program on a set of co-planar points, the points in the saved file all lie roughly on one plane, but with a slight deformation. Example data set (can be plotted with gnuplot: splot 'foo.dat' u 1:2:3 w p):
0.631341 -1.571493 10.095419
0.541732 -1.081704 9.808824
0.418602 -0.541069 9.344850
-0.192193 -1.637166 10.602867
-0.322056 -1.134386 10.287917
-0.489270 -0.575159 9.692506
-1.081642 -1.702032 11.066906
-1.246805 -1.174097 10.666016
-1.440508 -0.615120 10.058439
-1.994492 -1.803143 11.538129
-2.193148 -1.235700 11.008035
-2.407472 -0.669747 10.389552
So I'm wondering whether the imperfections are caused simply by numerical errors at each step (plus corner-detection errors, quantization errors, etc.), or whether there is an error in my method.
Nezumi-sama, Thu, 08 Jan 2015 04:00:50 -0600, http://answers.opencv.org/question/52579/