OpenCV Q&A Forum — OpenCV answers. Copyright OpenCV foundation, 2012-2018.

**Extracting the Essential matrix from the Fundamental matrix**
(http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/)

Hello everybody,
today I have a question for you all.
First of all, I've searched across this forum, the OpenCV documentation, and so on. The answer is probably in one of them, but at this point I need some clarification, which is why I'm asking here.
**INTRODUCTION**
I'm implementing an algorithm to recover the **calibration** of the cameras, so that the images can be rectified properly (more precisely, I'm estimating the extrinsic parameters). Most of my pipeline is fairly standard and can be found around the web. I don't want to recover the full calibration, only most of it. For instance, since I'm currently working with the KITTI dataset (http://www.cvlibs.net/publications/Geiger2013IJRR.pdf), I assume that I know the values of **K_00**, **K_01**, **D_00**, **D_01** (the camera intrinsics, given in the KITTI calibration files), so the camera matrices and the distortion coefficients are known.
I do the following:
- Starting from the raw distorted images, I apply the undistortion using the intrinsics.
- Extract corresponding points from the **Left** and **Right** images
- Match them using a matcher (FLANN or BFMatcher or whatever)
- Filter the matched points with an outlier rejection algorithm (I checked the result visually)
- Call **findFundamentalMat** to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)
If I calculate the error of the point correspondences by applying `x'^T * F * x = 0`, the result seems good (less than 0.1), and I suppose everything is fine, since there are plenty of examples around the web doing exactly that, so nothing new here.
Since I want to rectify the images, I need the essential matrix.
**THE PROBLEM**
First of all, I obtain the Essential matrix by simply applying formula (9.12) in the HZ book (page 257):
    cv::Mat E = K_01.t() * fundamentalMat * K_00;
I then normalize the coordinates to verify the quality of E.
Given two corresponding points (matched1 and matched2), I normalize as follows (I apply this to both sets of inliers that I've found; this is an example of what I do):
    cv::Mat _1 = cv::Mat(3, 1, CV_32F);
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
So now I have the Essential matrix and the normalized coordinates (I can eventually convert them to Point3f or other structures), and I can verify the relationship `x'^T * E * x = 0` *(HZ page 257, formula 9.11)*, iterating over all the normalized coordinates:
cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
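One caveat about the accumulation above: the residuals are signed, so summing them directly can let errors cancel; taking the absolute value per correspondence gives a more honest score. A self-contained sketch of that check (plain arrays in place of `cv::Mat`):

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Epipolar residual x2^T * E * x1 for one normalized correspondence (HZ 9.11).
double epipolarResidual(const Mat3& E, const Vec3& x1, const Vec3& x2) {
    double r = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r += x2[i] * E[i][j] * x1[j];
    return r;
}

// Mean absolute residual over all correspondences; near zero for a good E.
double meanAbsResidual(const Mat3& E,
                       const std::vector<Vec3>& pts1,
                       const std::vector<Vec3>& pts2) {
    double acc = 0.0;
    for (std::size_t k = 0; k < pts1.size(); ++k)
        acc += std::abs(epipolarResidual(E, pts1[k], pts2[k]));
    return pts1.empty() ? 0.0 : acc / pts1.size();
}
```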
On every execution of the algorithm, the Fundamental matrix changes **slightly**, as expected (and the mean error, as mentioned above, is always around 0.01), while the Essential matrix... changes a lot!
I tried to decompose the matrix using the OpenCV SVD implementation (I understand it's not the best, so I'll probably switch to LAPACK for this; any suggestion?), and here again the constraint that the two non-zero singular values must be equal is not respected, which drives my whole algorithm into a completely wrong estimation of the rectification.
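For reference, the (σ, σ, 0) property can be checked without committing to a particular SVD implementation: the squared singular values of E are the eigenvalues of EᵀE, which for a symmetric 3×3 matrix have a closed form. A self-contained sketch:

```cpp
#include <array>
#include <cmath>
#include <algorithm>
#include <functional>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Eigenvalues of a symmetric 3x3 matrix (closed-form, returned descending).
std::array<double, 3> symEigenvalues(const Mat3& A) {
    std::array<double, 3> eig{};
    double p1 = A[0][1]*A[0][1] + A[0][2]*A[0][2] + A[1][2]*A[1][2];
    if (p1 == 0.0) {                       // already diagonal
        eig = {A[0][0], A[1][1], A[2][2]};
    } else {
        double q  = (A[0][0] + A[1][1] + A[2][2]) / 3.0;
        double p2 = (A[0][0]-q)*(A[0][0]-q) + (A[1][1]-q)*(A[1][1]-q)
                  + (A[2][2]-q)*(A[2][2]-q) + 2.0 * p1;
        double p  = std::sqrt(p2 / 6.0);
        Mat3 B{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                B[i][j] = (A[i][j] - (i == j ? q : 0.0)) / p;
        double detB = B[0][0]*(B[1][1]*B[2][2] - B[1][2]*B[2][1])
                    - B[0][1]*(B[1][0]*B[2][2] - B[1][2]*B[2][0])
                    + B[0][2]*(B[1][0]*B[2][1] - B[1][1]*B[2][0]);
        double r = detB / 2.0;
        if (r < -1.0) r = -1.0;
        if (r >  1.0) r =  1.0;
        const double kPi = std::acos(-1.0);
        double phi = std::acos(r) / 3.0;
        eig[0] = q + 2.0 * p * std::cos(phi);
        eig[2] = q + 2.0 * p * std::cos(phi + 2.0 * kPi / 3.0);
        eig[1] = 3.0 * q - eig[0] - eig[2];
    }
    std::sort(eig.begin(), eig.end(), std::greater<double>());
    return eig;
}

// Singular values of E = sqrt(eigenvalues of E^T E); for a true essential
// matrix these should come out as (s, s, 0) up to noise.
std::array<double, 3> singularValues(const Mat3& E) {
    Mat3 EtE{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                EtE[i][j] += E[k][i] * E[k][j];
    std::array<double, 3> s = symEigenvalues(EtE);
    for (double& v : s) v = std::sqrt(std::max(v, 0.0));
    return s;
}
```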
I would also like to test this algorithm with images from my own cameras (I have two Allied Vision cameras), but I'm waiting for a high-quality chessboard, so the KITTI dataset is my starting point.
**EDIT** One previous error was in the formula: I computed the residual of E as `x^T * E * x' = 0` instead of `x'^T * E * x = 0`. This is now fixed, and the residual error of E seems good, but the Essential matrix I get each time is still very different... And after the SVD, the two singular values don't look similar, as they should.
**EDIT** These are the differing SVD singular values.
cv::SVD produces this result:
>133.70399
>127.47910
>0.00000
while Eigen's SVD produces the following:
>1.00777
>0.00778
>0.00000
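One caveat when comparing such numbers across runs or libraries: F (and hence E) is only defined up to scale, so raw entries and raw singular values can legitimately differ by a global factor between estimates. Normalizing to unit Frobenius norm before comparing makes them commensurable; a small sketch:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Scale a matrix to unit Frobenius norm so two estimates of E (each defined
// only up to scale) can be compared entry by entry.
Mat3 normalizedFrobenius(const Mat3& M) {
    double n = 0.0;
    for (const auto& row : M)
        for (double v : row) n += v * v;
    n = std::sqrt(n);
    Mat3 R{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            R[i][j] = M[i][j] / n;
    return R;
}
```

Note that the two singular-value lists above differ in their *ratio* as well, not only in scale, so normalization alone does not explain the discrepancy.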
Okay, maybe this is not an OpenCV-related problem, but any help is more than welcome.

*Posted Mon, 04 Mar 2019 11:56:14 -0600*

**Comment by HYPEREGO** *(Thu, 04 Apr 2019 07:10:36 -0500)*: I'll try both of the suggestions you gave me and I'll let you know. By the way, thank you for your time, I really appreciate it!
**Comment by Eduardo** *(Thu, 04 Apr 2019 07:01:56 -0500)*: There are other libraries that can perform this kind of vision task. You should be able to do the same thing (detect keypoints, fundamental matrix estimation, etc.) in Python (https://scikit-image.org/) or in Matlab and compare the results.
**Comment by Eduardo** *(Thu, 04 Apr 2019 06:52:47 -0500)*: The only difference I see between undistorting the images and then detecting keypoints vs. detecting keypoints and then undistorting the points:
- if the images are strongly distorted, it could help to undistort the images first to ease the keypoints matching process
- but undistorting the images could also introduce some image artifacts due to the interpolation
After undistorting the images, the distortion coefficients should be set to zero or to `noArray()` for the function calls that need distortion coefficients.
For the SVD, you can store the raw matrix and compare the results across different libraries (Matlab, Python, ...). For a singular or ill-conditioned matrix you might get different results between OpenCV and Eigen, but most of the time the results should match.
**Comment by HYPEREGO** *(Thu, 04 Apr 2019 04:18:30 -0500)*: What I've done so far is load the images and undistort them using the radial/tangential distortion coefficients from calibration. After that I compute the corresponding points and then call findFundamentalMat, using the same camera matrix used for the undistortion. But the points at the end look far too distorted, so I suppose the undistortion is being applied twice. So is the correct pipeline: load images -> compute corresponding points -> undistort them -> call findFundamentalMat? And regarding the SVD and the singular values, do you have any suggestion? (take a look at the original answer)
**Comment by Eduardo** *(Thu, 04 Apr 2019 03:21:10 -0500)*: Are you using [`undistortPoints()`](https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e)?
Looking at the [equations](https://docs.opencv.org/master/d9/d0c/group__calib3d.html#details): an image ray projected onto the normalized camera plane is distorted according to the estimated radial and tangential distortion coefficients, and then projected onto the image plane with the focal lengths and the principal point.
To undistort, the reverse perspective projection is applied with `cameraMatrix`. The points are then distorted but expressed in the normalized camera frame, and then they are undistorted. To get the points back in image coordinates, you have to pass the same camera matrix as `P`, in my opinion.
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I calculate the error of the point correspondences using <code>x'^T * F * x = 0</code>, the result seems good (less than 0.1), and I suppose everything is fine, since there are plenty of examples around the web doing exactly this, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix by simply applying formula (9.12) from the HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two corresponding points (matched1 and matched2), I normalize as follows (obviously I apply this to both sets of inliers that I've found; this is an example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert them to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x = 0</code> <em>(HZ page 257, formula 9.11)</em>, iterating over all the normalized coordinates:</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>On every execution of the algorithm, the value of the Fundamental Matrix changes <strong>slightly</strong>, as expected (and the mean error, as mentioned above, is always around 0.01), while the Essential Matrix... changes a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=211146#post-id-211146A new question was pointed out by a colleague regarding this problem. I first undistort point and I've seen that in the function another camera matrix can be provided, I think that I'm missing that but I cannot figure it out which matrix I need. And also, it is correct to perform the undistortion before the feature matching? I've 3 books regarding computer vision (HZ, Kanatani et al, Learning OpenCV) but I still doesn't understand which camera matrix use and when...Wed, 03 Apr 2019 09:59:17 -0500http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=211146#post-id-211146Comment by HYPEREGO for <div class="snippet"><p>Hello everybody,</p>
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I try to calculate the error of the points correspondence applying <code>x' * F * x = 0</code> the result seems to be good (less than 0.1) and I suppose that everything is ok since there are a lot of examples around the web of doing that, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two correspondent points (matched1 and matched2), I do the normalization process as (obviously I apply that to the two sets of inliers that I've found, this is the example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x=0</code> <em>(HZ page 257, formula 9.11)</em> (I iterate over all the normalized coordinates)</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>Every execution of the algorithm, the value of the Fundamental Matrix <strong>slightly</strong> change as expected (but the mean error, as mentioned above, is always something around 0.01) while the Essential Matrix... change a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210213#post-id-210213Thank you again Eduardo. I've already seen your post (thank for being so exhaustive) and in fact I calculate the mean error of my fundamental matrix as you do in your example (I've copied and pasted the code eheh): the value I got is usually under 0.1. Like now, after an execution, I got 0.05 as mean error, sometimes 0.018.. are this measure in pixels?
findEssentialMat (and most of OpenCV function in the same module) assumes that the two camera matrices are identical, and it isn't my case (with real camera they can't be tha same...) so I can't use it, but luckily your function will help me. However, if I try to find the Essential matrix using the formula mentioned above, every time the matrix is very dirrent each execution, and this drive me to a bad solution at the endWed, 13 Mar 2019 05:46:23 -0500http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210213#post-id-210213Comment by Eduardo for <div class="snippet"><p>Hello everybody,</p>
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I try to calculate the error of the points correspondence applying <code>x' * F * x = 0</code> the result seems to be good (less than 0.1) and I suppose that everything is ok since there are a lot of examples around the web of doing that, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two correspondent points (matched1 and matched2), I do the normalization process as (obviously I apply that to the two sets of inliers that I've found, this is the example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x=0</code> <em>(HZ page 257, formula 9.11)</em> (I iterate over all the normalized coordinates)</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>Every execution of the algorithm, the value of the Fundamental Matrix <strong>slightly</strong> change as expected (but the mean error, as mentioned above, is always something around 0.01) while the Essential Matrix... change a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210190#post-id-210190In [this answer](http://answers.opencv.org/question/206817/extract-rotation-and-translation-from-fundamental-matrix/?answer=206978#post-id-206978) you can find some code I wrote to play with the fundamental / essential matrix.
The idea is the following:
- generate two viewpoints (the transformation between the left and right cameras is known)
- estimate the fundamental matrix
- get the essential matrix
- recover the transformation (R and t) between the left/right cameras from the essential matrixTue, 12 Mar 2019 15:12:44 -0500http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210190#post-id-210190Comment by HYPEREGO for <div class="snippet"><p>Hello everybody,</p>
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I try to calculate the error of the points correspondence applying <code>x' * F * x = 0</code> the result seems to be good (less than 0.1) and I suppose that everything is ok since there are a lot of examples around the web of doing that, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two correspondent points (matched1 and matched2), I do the normalization process as (obviously I apply that to the two sets of inliers that I've found, this is the example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x=0</code> <em>(HZ page 257, formula 9.11)</em> (I iterate over all the normalized coordinates)</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>Every execution of the algorithm, the value of the Fundamental Matrix <strong>slightly</strong> change as expected (but the mean error, as mentioned above, is always something around 0.01) while the Essential Matrix... change a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210184#post-id-210184Hi Eduardo thank you for the reply. I've edited my question adding more details since today I've found that the problem is not in SVD but is in the Essential Matrix. Any help is really appreciated, eventually even with some references :) Thanks in advance to everybody that give an help!Tue, 12 Mar 2019 12:23:28 -0500http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210184#post-id-210184Comment by Eduardo for <div class="snippet"><p>Hello everybody,</p>
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I try to calculate the error of the points correspondence applying <code>x' * F * x = 0</code> the result seems to be good (less than 0.1) and I suppose that everything is ok since there are a lot of examples around the web of doing that, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two correspondent points (matched1 and matched2), I do the normalization process as (obviously I apply that to the two sets of inliers that I've found, this is the example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x=0</code> <em>(HZ page 257, formula 9.11)</em> (I iterate over all the normalized coordinates)</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>Every execution of the algorithm, the value of the Fundamental Matrix <strong>slightly</strong> change as expected (but the mean error, as mentioned above, is always something around 0.01) while the Essential Matrix... change a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210183#post-id-210183Yes the formula is correct: `E = K2^T * F * K1`. See also [here](https://github.com/opencv/opencv_contrib/blob/75fcfa609316456a83d0c5878b65e81dcf3a0fcd/modules/sfm/src/fundamental.cpp#L429).Tue, 12 Mar 2019 11:50:02 -0500http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=210183#post-id-210183Comment by HYPEREGO for <div class="snippet"><p>Hello everybody,</p>
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I try to calculate the error of the points correspondence applying <code>x' * F * x = 0</code> the result seems to be good (less than 0.1) and I suppose that everything is ok since there are a lot of examples around the web of doing that, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two correspondent points (matched1 and matched2), I do the normalization process as (obviously I apply that to the two sets of inliers that I've found, this is the example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x=0</code> <em>(HZ page 257, formula 9.11)</em> (I iterate over all the normalized coordinates)</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>Every execution of the algorithm, the value of the Fundamental Matrix <strong>slightly</strong> change as expected (but the mean error, as mentioned above, is always something around 0.01) while the Essential Matrix... change a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=209826#post-id-209826I've edited the question, thank you for the reply :)Tue, 05 Mar 2019 03:45:14 -0600http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=209826#post-id-209826Comment by berak for <div class="snippet"><p>Hello everybody,</p>
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I try to calculate the error of the points correspondence applying <code>x' * F * x = 0</code> the result seems to be good (less than 0.1) and I suppose that everything is ok since there are a lot of examples around the web of doing that, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two correspondent points (matched1 and matched2), I do the normalization process as (obviously I apply that to the two sets of inliers that I've found, this is the example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x=0</code> <em>(HZ page 257, formula 9.11)</em> (I iterate over all the normalized coordinates)</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>Every execution of the algorithm, the value of the Fundamental Matrix <strong>slightly</strong> change as expected (but the mean error, as mentioned above, is always something around 0.01) while the Essential Matrix... change a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=209821#post-id-209821all i'm saying is: you should rule out the type problem, it's not relevantTue, 05 Mar 2019 03:32:53 -0600http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=209821#post-id-209821Comment by HYPEREGO for <div class="snippet"><p>Hello everybody,</p>
<p>today I've a question for you all.
First of all, I've searched across the forum, across OpenCV forum and so on. The answer is probably inside one of them, but at this point I need some clarification, that's why I'm here with my question.</p>
<p><strong>INTRODUCTION</strong></p>
<p>I'm implementing an algorithm able to recover the <strong>calibration</strong>of the cameras, able to rectify the images in a good manner (to be more clear, estimating the extrinsic parameters). Most of my pipeline is pretty easy, and can be found around of the web. Obviously, I don't want to recover the full calibration but most of it. For instance, since I'm actually working with the KITTI dataset (<a href="http://www.cvlibs.net/publications/Geiger2013IJRR.pdf">http://www.cvlibs.net/publications/Ge...</a>), I suppose that I know the value of <strong>K_00</strong>, <strong>K_01</strong>, <strong>D_00</strong>, <strong>D_01</strong> (camera intrinsics, they're given in their calibration file), so the value of the camera matrices and the distortion coefficient are known.</p>
<p>I do the following:</p>
<ul>
<li>Starting from the raw distorted images, I apply the undistortion using the intrinsics.</li>
<li>Extract corresponding points from the <strong>Left</strong> and <strong>Right</strong> images</li>
<li>Match them using a matcher (FLANN or BFMatcher or whatever)</li>
<li>Filter the matched points with an outlier rejection algorithm (I checked the result visually)</li>
<li>Call <strong>findFundamentalMat</strong> to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)</li>
</ul>
<p>If I try to calculate the error of the points correspondence applying <code>x' * F * x = 0</code> the result seems to be good (less than 0.1) and I suppose that everything is ok since there are a lot of examples around the web of doing that, so nothing new. <br>
Since I want to rectify the images, I need the essential matrix. </p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat* K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two correspondent points (matched1 and matched2), I do the normalization process as (obviously I apply that to the two sets of inliers that I've found, this is the example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F)
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
</code></pre>
<p>So now I have the Essential Matrix and the normalized coordinates (I can eventually convert to Point3f or other structures), so I can verify the relationship <code>x'^T * E * x=0</code> <em>(HZ page 257, formula 9.11)</em> (I iterate over all the normalized coordinates)</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
</code></pre>
<p>Every execution of the algorithm, the value of the Fundamental Matrix <strong>slightly</strong> change as expected (but the mean error, as mentioned above, is always something around 0.01) while the Essential Matrix... change a lot! </p>
<p>I tried to decompose the matrix using the ...<span class="expander"> <a>(more)</a></span></p></div>http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=209820#post-id-209820Indeed, as for the 8 point algorithm. So that can't be the problem of my algorithm. Regarding the SVD you know how it is performed in OpenCV?Tue, 05 Mar 2019 03:29:48 -0600http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/?comment=209820#post-id-209820Comment by berak for <div class="snippet"><p>Hello everybody,</p>
<p>Since I want to rectify the images, I need the essential matrix.</p>
<p><strong>THE PROBLEM</strong></p>
<p>First of all, I obtain the Essential matrix by simply applying formula (9.12) from the HZ book (page 257), E = K'^T F K:</p>
<pre><code>cv::Mat E = K_01.t() * fundamentalMat * K_00;
</code></pre>
<p>I then normalize the coordinates to verify the quality of E.
Given two corresponding points (matched1 and matched2), I do the normalization as follows (I apply this to both sets of inliers I've found; this is an example of what I do):</p>
<pre><code>cv::Mat _1 = cv::Mat(3, 1, CV_32F);  // homogeneous pixel coordinate
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = K_00.inv() * _1;  // x_n = K^-1 * x
</code></pre>
<p>So now I have the Essential matrix and the normalized coordinates (which I can convert to Point3f or other structures if needed), so I can verify the relationship <code>x'^T * E * x = 0</code> <em>(HZ page 257, formula 9.11)</em>, iterating over all the normalized coordinates:</p>
<pre><code>cv::Mat residual = normalized_2.t() * E * normalized_1;  // 1x1 matrix
residual_value += std::abs(cv::sum(residual)[0]);  // accumulate |x'^T E x|
</code></pre>
<p>On every execution of the algorithm, the value of the Fundamental matrix changes <strong>slightly</strong>, as expected (but the mean error, as mentioned above, is always around 0.01), while the Essential matrix... changes a lot! </p>
<p>I tried to decompose the matrix using the ...</p></div>
Comment by berak (Mon, 04 Mar 2019):

> If I simply convert the point from cv::Point2i to cv::Point2f the result doesn't change, I get the same fundamental matrix.

they're [converted to float32 internally](https://github.com/opencv/opencv/blob/master/modules/calib3d/src/fundam.cpp#L787), anyway.