OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en)
Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018.
Wed, 08 Jul 2015 14:38:52 -0500

triangulate to 3-D on corresponding 2-D points
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/
I have a left and right image of a scene, taken with identical cameras. The cameras were placed fairly far apart, about 135cm, and the difference in the angle of their gaze is maybe 30 degrees. I've calibrated the two cameras independently with asymmetric circles, and the resulting values seem sane and can undistort images in a sane way.
There is an object in the images with known dimensions -- it's a table. By hand, I've identified the x,y pixel coordinates of 8 corresponding key points in each image (6 on the table top plane, 2 below in the table's legs). I know the true 3-D coordinates of those 8 points in the scene because I measured them.
How can I use the 2 camera matrices, 2 distortion vectors, 2 vectors of 8 corresponding 2-D points, and 1 vector of 8 corresponding 3-D points to arrive at a formula/algorithm that approximates new 3-D points given their 2-D locations in each image? I've been testing by trying to recreate the 3-D locations of those 8 points, but I plan to use it on new features in phase 2 of this project.
Here's what I've tried so far.
Attempt #1:
- stereoCalibrate to get rotation and translation between cameras
- stereoRectify to get left and right projection matrices
- triangulatePoints using the two projection matrices and the two sets of undistorted points, and convert from homogeneous to "normal" using convertPointsFromHomogeneous
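The last step of Attempt #1 can be sanity-checked in isolation. What cv2.triangulatePoints computes is essentially linear (DLT) triangulation from two 3x4 projection matrices; the NumPy-only sketch below reimplements it on made-up intrinsics and a made-up 1.35 m baseline, so the whole loop (project a known 3-D point, then triangulate it back) runs without any calibration data.

```python
# DLT triangulation sketch: given two 3x4 projection matrices and matching
# image points, recover the 3-D point. All numbers below are synthetic.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one point from image coords x1, x2 and 3x4 matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A
    return X[:3] / X[3]             # homogeneous -> Euclidean

# Two cameras: identity pose, and one translated 1.35 m along x (made up).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.35], [0], [0]])])

X_true = np.array([0.4, 0.2, 3.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate_dlt(P1, P2, x1, x2)   # recovers X_true up to numerics
print(X_est)
```

If this round trip works but the real pipeline does not, the problem is upstream of triangulation, i.e. in the projection matrices or in the point coordinates fed to them.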
Attempt #2:
- solvePnP, independently for left and right, on the 2-D and 3-D points to arrive at rotation and translation
- get the relative rotation and translation between the cameras by subtracting one rotation vector from the other and one translation vector from the other (yes, this could easily be wrong)
- stereoRectify to get left and right projection matrices
- triangulatePoints using the two projection matrices and the two sets of undistorted points, and convert from homogeneous to "normal" using convertPointsFromHomogeneous
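The second step above is indeed the likely culprit: rigid-body poses compose by matrix multiplication, not by subtracting rotation and translation vectors. Assuming solvePnP's usual world-to-camera convention, with poses (R1, t1) and (R2, t2), the pose of camera 2 relative to camera 1 is R = R2·R1ᵀ and t = t2 − R·t1 (the rotation *vectors* solvePnP returns would first be converted to matrices with cv2.Rodrigues). A NumPy check on made-up poses:

```python
# Composing a relative pose from two world-to-camera poses.
# Subtraction of rvec/tvec pairs is NOT equivalent to this.
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Two made-up world-to-camera poses, as solvePnP might return (after Rodrigues).
R1, t1 = rot_z(0.1), np.array([0.0, 0.0, 2.0])
R2, t2 = rot_z(0.6), np.array([1.35, 0.0, 2.0])

# Correct relative pose, camera 1 frame -> camera 2 frame:
R_rel = R2 @ R1.T
t_rel = t2 - R_rel @ t1

# Check: mapping a world point into camera 1 and then through (R_rel, t_rel)
# must equal mapping it straight into camera 2.
Xw = np.array([0.3, -0.2, 4.0])
via_cam1 = R_rel @ (R1 @ Xw + t1) + t_rel
direct   = R2 @ Xw + t2
print(np.allclose(via_cam1, direct))   # True
```

The (R_rel, t_rel) pair built this way is what stereoRectify expects in place of the stereoCalibrate output.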
Attempt #3:
- solvePnP, independently for left and right, on the 2-D and 3-D points to arrive at rotation and translation
- undistortPoints on the 2-D points
- make a 3x4 projection matrix for left and right as [R | T] (yes, this could easily be wrong but I must have read it somewhere)
- triangulatePoints using the two projection matrices and the two sets of undistorted points, and convert from homogeneous to "normal" using convertPointsFromHomogeneous
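Whether [R | T] is the right projection matrix in Attempt #3 depends on which coordinates the points are in: cv2.undistortPoints with no P argument returns *normalized* coordinates, which pair with P = [R | t], while raw pixel coordinates pair with P = K·[R | t]. Mixing the two pairings is a classic way to get wildly scaled 3-D output. A NumPy-only check (synthetic K, poses, and point) that both consistent pairings recover the same point:

```python
# Consistency of coordinates and projection matrices in linear triangulation.
# All intrinsics, poses, and points below are made up for the demonstration.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1)
    return x[:2] / x[2]

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
Rt1 = np.hstack([np.eye(3), np.zeros((3, 1))])
Rt2 = np.hstack([np.eye(3), np.array([[-1.35], [0], [0]])])

X_true = np.array([0.5, -0.1, 3.5])

# Pixel coordinates must go with K @ [R|t] ...
X_pix = triangulate_dlt(K @ Rt1, K @ Rt2,
                        project(K @ Rt1, X_true), project(K @ Rt2, X_true))
# ... normalized coordinates (undistortPoints output) go with plain [R|t]:
X_norm = triangulate_dlt(Rt1, Rt2,
                         project(Rt1, X_true), project(Rt2, X_true))

print(np.allclose(X_pix, X_true), np.allclose(X_norm, X_true))  # True True
```

So Attempt #3 is structurally sound only if the undistorted points really are normalized; if undistortPoints was called with P set to the camera matrix, the output is back in pixels and needs K·[R | t].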
If I had to guess, I'd say Attempt #1 is the best as it uses the higher level stereoCalibrate and it just so happens that the length of the translation vector is 132cm -- maybe a coincidence, but that is the distance between the cameras.
However, all these attempts (and many minor variations) give 3D answers that seem to be nonsense. For instance, one of the points is given as 50 meters away from the scene. They don't resemble the 3D points used as inputs.
This is my first OpenCV project, so I'm sure I'm doing something foolish. I have done a lot of reading online trying to find an example that works, but nothing yet. I'd really appreciate any guidance.
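One cheap diagnostic that applies to all three attempts: reproject the triangulated (or even the hand-measured) 3-D points with each candidate projection matrix and compare against the clicked pixels. A large reprojection error means the projection matrices themselves are inconsistent with the correspondences, so the triangulation step is not the thing to debug. A small NumPy sketch with synthetic data:

```python
# Reprojection-error check: a projection matrix consistent with the
# 2-D/3-D correspondences should reproject them to ~0 pixel error.
import numpy as np

def reprojection_error(P, X3d, x2d):
    """Mean pixel error of 3-D points X3d (N,3) reprojected by 3x4 P vs x2d (N,2)."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.mean(np.linalg.norm(proj - x2d, axis=1)))

# Synthetic demo: pixels generated from P itself, so the error is ~0.
P = np.array([[700.0, 0, 320, 0], [0, 700.0, 240, 0], [0, 0, 1, 0]])
X3d = np.array([[0.2, 0.1, 2.0], [-0.3, 0.2, 3.0]])
x2d = (P @ np.hstack([X3d, np.ones((2, 1))]).T).T
x2d = x2d[:, :2] / x2d[:, 2:3]

err = reprojection_error(P, X3d, x2d)
print(err)   # ~0.0
```

Running this with the real 8 measured points and each attempt's left/right projection matrices (in consistent units, cm vs m is an easy mismatch) shows immediately which pose estimate is off.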
Sat, 02 Mar 2013 23:02:33 -0600
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/
Comment by JoseLuisGT for <div class="snippet"><p>I have a left and right image of a scene, taken with identical cameras. ...</p>
<p>This is my first OpenCV project, so I'm sure I'm doing something foolish. I have done a lot of reading ...</p></div>
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/?comment=11566#post-id-11566
I was unable (my fault) to use the triangulation method from OpenCV; instead I used this method: http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/ Note that P is a 3x3 identity matrix joined to a 3x1 zero vector ([1 0 0 0; 0 1 0 0; 0 0 1 0]), and P1 is the rotation matrix joined to the translation between the cameras (R | t). The code needed some minor adjustments at the time; I expect it will work for you.
Sat, 13 Apr 2013 14:06:05 -0500
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/?comment=11566#post-id-11566
Comment by manovel for <div class="snippet"><p>I have a left and right image of a scene, taken with identical cameras. ...</p>
<p>There is an object in the images with known dimensions -- it's a table. ...</p></div>
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/?comment=57501#post-id-57501
Hi, did you find a solution?
I'm facing a similar problem!
Sun, 15 Mar 2015 08:52:19 -0500
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/?comment=57501#post-id-57501
Comment by themightyoarfish for <div class="snippet"><p>I have a left and right image of a scene, taken with identical cameras. ...</p></div>
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/?comment=65830#post-id-65830
Yup, same here. After much research it seems to me that almost no one gets this to work.
Wed, 08 Jul 2015 14:38:52 -0500
http://answers.opencv.org/question/8369/triangulate-to-3-d-on-corresponding-2-d-points/?comment=65830#post-id-65830