Triangulate 3D points from stereo images?

I have a stereo pair of images, downloaded from a dataset along with the camera calibration data.

I am using the camera projection matrices, which look like this:

6.452401 0.000000 6.359587 0.000000 
0.000000 6.452401 1.941291 0.000000
0.000000 0.000000 1.000000 0.000000


6.452401 0.000000 6.359587 -3.682632 
0.000000 6.452401 1.941291 0.000000 
0.000000 0.000000 1.000000 0.000000
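
In case it matters, this is roughly how the two matrices end up as the P1 and P2 arguments used below (just a sketch with the values above hard-coded; my actual loading code is not shown):

cv::Mat P1 = (cv::Mat_<double>(3, 4) <<
    6.452401, 0.000000, 6.359587,  0.000000,
    0.000000, 6.452401, 1.941291,  0.000000,
    0.000000, 0.000000, 1.000000,  0.000000);

cv::Mat P2 = (cv::Mat_<double>(3, 4) <<
    6.452401, 0.000000, 6.359587, -3.682632,
    0.000000, 6.452401, 1.941291,  0.000000,
    0.000000, 0.000000, 1.000000,  0.000000);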

I have feature-matched keypoints on both frames, which I then run through cv::triangulatePoints, like this:

std::vector<cv::Point2f> points2dLeft, points2dRight;

cv::Mat points_3D(1, 5, CV_64FC4);   // triangulatePoints overwrites this with a 4xN matrix of homogeneous points


// collect the matched pixel coordinates from the left and right images
for (std::vector<cv::DMatch>::const_iterator it = good_matches.begin(); it != good_matches.end(); ++it)
{

    float x = keypoints1[it->queryIdx].pt.x;
    float y = keypoints1[it->queryIdx].pt.y;
    points2dLeft.push_back(cv::Point2f(x, y));

    x = keypoints2[it->trainIdx].pt.x;
    y = keypoints2[it->trainIdx].pt.y;
    points2dRight.push_back(cv::Point2f(x, y));
}


cv::triangulatePoints(P1, P2, points2dLeft, points2dRight, points_3D);
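
For completeness, this is roughly how I bring the 4xN homogeneous output back to Euclidean 3D points (a sketch; each column gets divided by its fourth component, and the convertTo is only there so I know which depth I am indexing):

cv::Mat pts;
points_3D.convertTo(pts, CV_64F);          // 4xN homogeneous coordinates from triangulatePoints

std::vector<cv::Point3d> points3d;
for (int i = 0; i < pts.cols; ++i)
{
    double w = pts.at<double>(3, i);       // fourth (scale) component of this column
    points3d.push_back(cv::Point3d(pts.at<double>(0, i) / w,
                                   pts.at<double>(1, i) / w,
                                   pts.at<double>(2, i) / w));
}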

And it gives me terrible results (the images are from http://www.cvlibs.net/datasets/karlsruhe_sequences/):

 -0.994567 -0.0603811 -0.0134074
 -0.986978 -0.133252 -0.0286039
 -0.995871 -0.0881813 -0.0167059
 -0.99789 -0.0591445 -0.0112049
 -0.98817 -0.132973 -0.0241383
 -0.990287 -0.134424 -0.0240541...
These are the 2D points:

http://s000.tinyupload.com/download.php?file_id=00838533751330310698&t=0083853375133031069899331 http://s000.tinyupload.com/download.php?file_id=88623312011012288167&t=8862331201101228816795940

And this is the camera calibration data:

http://www.mrt.uni-karlsruhe.de/geigerweb/cvlibs.net/karlsruhe_sequences/2010_03_09_calib.txt

I have also tried the method here:

http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/

which gives different, but just as awful, results (roughly what I reproduced from that post is sketched below). Has anyone successfully done this and created an accurate point cloud?
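
That post implements a linear least-squares (DLT-style) triangulation from Hartley and Zisserman; this is roughly what I reproduced, not the post's exact code. Here u1/u2 are one pair of matched pixel coordinates and P1/P2 are the 3x4 projection matrices as CV_64F:

cv::Point3d triangulateLinear(const cv::Point2d& u1, const cv::Point2d& u2,
                              const cv::Mat& P1, const cv::Mat& P2)
{
    // Build the 4x4 system A*X = 0 from x = P*X in both views.
    cv::Mat A(4, 4, CV_64F);
    for (int j = 0; j < 4; ++j)
    {
        A.at<double>(0, j) = u1.x * P1.at<double>(2, j) - P1.at<double>(0, j);
        A.at<double>(1, j) = u1.y * P1.at<double>(2, j) - P1.at<double>(1, j);
        A.at<double>(2, j) = u2.x * P2.at<double>(2, j) - P2.at<double>(0, j);
        A.at<double>(3, j) = u2.y * P2.at<double>(2, j) - P2.at<double>(1, j);
    }

    // The solution is the right singular vector for the smallest singular value.
    cv::SVD svd(A);
    cv::Mat X = svd.vt.row(3).t();
    X /= X.at<double>(3);                  // back to Euclidean coordinates
    return cv::Point3d(X.at<double>(0), X.at<double>(1), X.at<double>(2));
}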