OpenCV Q&A Forum - opengl Triangulate Points
http://answers.opencv.org/question/38252/opengl-triangulate-points/
Copyright OpenCV foundation, 2012-2018.

Hello,
Today we're doing SfM, and we want to draw the result in OpenGL.
First we calculate optical flow using Farneback. This gives us two sets of 2D points.
We then use undistortPoints, with the camera matrix and distortion coefficients, to tidy the points up.
/* Undistort the points based on intrinsic params and dist coeff */
cv::undistortPoints( left_points, left_points, cam_matrix, dist_coeff );
cv::undistortPoints( right_points, right_points, cam_matrix, dist_coeff );
Using these two sets of points, we compute the fundamental matrix and then the essential matrix.
/* Try to find essential matrix from the points */
cv::Mat fundamental = cv::findFundamentalMat( left_points, right_points, CV_FM_RANSAC, 3.0, 0.99 );
cv::Mat essential = cam_matrix.t() * fundamental * cam_matrix;
Applying an SVD to the essential matrix yields the candidate rotation matrices and translation vectors.
/* Find the projection matrix between those two images */
cv::SVD svd( essential );
static const cv::Mat W = (cv::Mat_<double>(3, 3) <<
0, -1, 0,
1, 0, 0,
0, 0, 1);
static const cv::Mat W_inv = W.inv();
cv::Mat_<double> R1 = svd.u * W * svd.vt;
cv::Mat_<double> T1 = svd.u.col( 2 );
cv::Mat_<double> R2 = svd.u * W_inv * svd.vt;
cv::Mat_<double> T2 = -svd.u.col( 2 );
R1 and T1 are used to create the second projection matrix.
static const cv::Mat P1 = cv::Mat::eye(3, 4, CV_64FC1 );
cv::Mat P2 =( cv::Mat_<double>(3, 4) <<
R1(0, 0), R1(0, 1), R1(0, 2), T1(0),
R1(1, 0), R1(1, 1), R1(1, 2), T1(1),
R1(2, 0), R1(2, 1), R1(2, 2), T1(2));
Triangulate points.
cv::Mat out;
cv::triangulatePoints( P1, P2, left_points, right_points, out );
We can then convert the homogeneous points to 3D points, which are zero-centered, I believe.
/* Triangulate the points to find the 3D homogenous points in the world space
Note that each column of the 'out' matrix corresponds to the 3d homogenous point
*/
std::vector<cv::Point3f> pt_3d;
cv::convertPointsHomogeneous(out.reshape(4, 1), pt_3d);
Using OpenGL we can draw points.
Here are a number of questions that I am still having difficulty with.
1. What are R2 and T2? The rotation from the second camera back to the first? The inverse rotation?
2. How can I use these values in OpenGL?
Since we now have P1 and P2, we can decompose these projection matrices to get Euler angles. An old tutorial on POSIT mentioned that Euler angles are important for OpenGL.
cv::decomposeProjectionMatrix(InputArray projMatrix, OutputArray cameraMatrix, OutputArray rotMatrix, OutputArray transVect, OutputArray rotMatrixX=noArray(), OutputArray rotMatrixY=noArray(), OutputArray rotMatrixZ=noArray(), OutputArray eulerAngles=noArray() )
3. Should I be using the Rodrigues transform here? What does it do?
> "In the theory of three-dimensional
> rotation, Rodrigues' rotation formula
> (named after Olinde Rodrigues) is an
> efficient algorithm for rotating a
> vector in space, given an axis and
> angle of rotation. By extension, this
> can be used to transform all three
> basis vectors to compute a rotation
> matrix from an axis–angle
> representation. In other words, the
> Rodrigues formula provides an
> algorithm to compute the exponential
> map from so(3) to SO(3) without
> computing the full matrix exponent."
Mon, 28 Jul 2014 04:55:10 -0500

Comment by StevenPuttemans (Mon, 28 Jul 2014 04:58:50 -0500): Some markup can do wonders :)
Comment by MRDaniel (Mon, 28 Jul 2014 05:01:16 -0500): @StevenPuttemans I don't understand, markup?
Comment by StevenPuttemans (Mon, 28 Jul 2014 05:10:17 -0500): I formatted your code :) you forgot to use the standard markup for topics ;)
Comment by MRDaniel (Mon, 28 Jul 2014 05:28:16 -0500): @StevenPuttemans thank you.

Answer by Duffycola (Tue, 29 Jul 2014 06:50:22 -0500):
http://answers.opencv.org/question/38252/opengl-triangulate-points/?answer=38326#post-id-38326

> What are R2 and T2?
You are extracting the camera's extrinsic parameters from the essential matrix (which encodes the epipolar geometry between two calibrated views). The first camera is assumed to be fixed at `[I|0]`, and the solution for the second camera is fourfold ambiguous: `[R1|T1]`, `[R1|T2]`, `[R2|T1]`, or `[R2|T2]`. You can identify the correct solution by triangulating a matching pair of 2D image points: in only one of the four solutions will the triangulated 3D point be in front of both cameras. This is explained in detail in [chapter 9.6.2 of Hartley and Zisserman's book](http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf).
It would be interesting to know where you got the code from, because I am currently trying to work out as well how to project the points and test whether they are in front of the cameras.
Note that in Hartley and Zisserman's book the assumption is made that the SVD decomposes the matrix into `U*diag(1,1,0)*V'`. I believe that, up to scale, you are good to go if the first two singular values of your decomposition are almost equal and the third is almost zero (look at svd.w).
However, I ran into a related [problem](http://answers.opencv.org/question/38340/estimate-camera-pose-extrinsic-parameters-from/). My singular values are not equal at all, and in summary I chose the following implementation:
static const cv::Mat W_ = (cv::Mat_<double>(3, 3) <<
1, 0, 0,
0, -1, 0,
0, 0, 1);
cv::Mat_<double> R1 = svd.u * cv::Mat::diag(svd.w) * svd.vt;
cv::Mat_<double> R2 = svd.u * cv::Mat::diag(svd.w) * W_ * svd.vt;
cv::Mat_<double> T1 = svd.u.col( 2 );
I am not sure about it, but maybe it helps you. I am also stuck at the point where I have to choose the correct matrix by triangulating a matched point and testing whether it is in front of both cameras.
![image description](/upfiles/14066344712045047.png)
Image from Hartley, R. I. and Zisserman, A. - Multiple View Geometry in Computer Vision, 2004

Comment by Duffycola (Wed, 30 Jul 2014 08:57:28 -0500): It appears there is some code available in Chapter 4 of the "Mastering OpenCV" book. See here: https://github.com/MasteringOpenCV/code/blob/master/Chapter4_StructureFromMotion/FindCameraMatrices.cpp
Comment by Duffycola (Wed, 30 Jul 2014 09:16:11 -0500): Wow, look at this more detailed explanation: http://answers.opencv.org/question/27155/from-fundamental-matrix-to-rectified-images/
Comment by StevenPuttemans (Wed, 30 Jul 2014 03:33:52 -0500): Nice explanation! +1 for the effort!