OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018. Mon, 04 Mar 2019 11:56:14 -0600

**Extracting the Essential matrix from the Fundamental matrix**
http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/

Hello everybody,
today I have a question for you all.
First of all, I have searched across this forum, the other OpenCV resources, and so on. The answer is probably in one of those places, but at this point I need some clarification, which is why I am here with my question.
**INTRODUCTION**
I'm implementing an algorithm to recover the **calibration** of the cameras, so that the images can be properly rectified (to be more precise, I am estimating the extrinsic parameters). Most of my pipeline is fairly standard and can be found around the web. Obviously, I don't want to recover the full calibration, only most of it. For instance, since I'm currently working with the KITTI dataset (http://www.cvlibs.net/publications/Geiger2013IJRR.pdf), I assume that I know **K_00**, **K_01**, **D_00**, **D_01** (the camera intrinsics, given in their calibration file), so the camera matrices and the distortion coefficients are known.
I do the following:
- Starting from the raw distorted images, I apply the undistortion using the intrinsics.
- Extract corresponding points from the **Left** and **Right** images
- Match them using a matcher (FLANN or BFMatcher or whatever)
- Filter the matched points with an outlier rejection algorithm (I checked the result visually)
- Call **findFundamentalMat** to retrieve the fundamental matrix (I call it with LMedS, since I've already filtered most of the outliers in the previous step)
If I compute the error of the point correspondences by evaluating `x'^T * F * x = 0`, the result seems good (less than 0.1), and I suppose everything is fine, since there are plenty of examples around the web doing exactly that, so nothing new.
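A note on the error measure: the algebraic residual `x'^T F x` depends on the scale of F and of the points, so its magnitude alone is hard to interpret. The standard first-order geometric error is the Sampson distance (HZ, section 11.4.3). A NumPy sketch (the forum code is C++, but the algebra is identical; the matrix and points below are made up purely for illustration):

```python
import numpy as np

def sampson_error(F, x1, x2):
    """First-order geometric error for the constraint x2^T F x1 = 0
    (HZ, section 11.4.3); x1, x2 are homogeneous 3-vectors."""
    num = float(x2 @ F @ x1) ** 2
    Fx1 = F @ x1            # epipolar line of x1 in image 2
    Ftx2 = F.T @ x2         # epipolar line of x2 in image 1
    return num / (Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2)

# toy rank-2 matrix standing in for a real F, plus a correspondence
# lying exactly on its epipolar line
F = np.array([[0., -3., 2.],
              [3., 0., -1.],
              [-2., 1., 0.]])          # [t]_x for t = (1, 2, 3): rank 2, like a real F
x1 = np.array([10., 20., 1.])
l2 = F @ x1                             # epipolar line of x1 in image 2
x2 = np.array([4., -(l2[0] * 4. + l2[2]) / l2[1], 1.])  # a point on that line
print(sampson_error(F, x1, x2))         # ~0 for a perfect correspondence
```

Unlike the raw algebraic residual, this value is in (squared) pixel-like units, so "good" and "bad" are easier to judge.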
Since I want to rectify the images, I need the essential matrix.
**THE PROBLEM**
First of all, I obtain the Essential matrix simply applying the formula (9.12) in HZ book (page 257):
    cv::Mat E = K_01.t() * fundamentalMat * K_00;
I then normalize the coordinates to verify the quality of E.
Given two corresponding points (matched1 and matched2), I do the normalization as follows (I apply this to both sets of inliers that I found; this is an example for one of them):
    cv::Mat _1 = cv::Mat(3, 1, CV_32F); // K_00 must have the same depth (CV_32F) for the product below
    _1.at<float>(0,0) = matched1.x;
    _1.at<float>(1,0) = matched1.y;
    _1.at<float>(2,0) = 1;
    cv::Mat normalized_1 = K_00.inv() * _1;
So now I have the Essential Matrix and the normalized coordinates (I can convert them to Point3f or other structures if needed), and I can verify the relationship `x'^T * E * x = 0` *(HZ page 257, formula 9.11)*, iterating over all the normalized coordinates:
    cv::Mat residual = normalized_2.t() * E * normalized_1;
    residual_value += cv::sum(residual)[0];
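The two snippets above (building E per HZ (9.12) and checking the residual of (9.11)) can be collected into a short NumPy sketch. The intrinsics below are hypothetical stand-ins for KITTI's K_00/K_01 (not the real values), and the function names are my own:

```python
import numpy as np

# Hypothetical intrinsics, stand-ins for KITTI's K_00 / K_01
K_00 = np.array([[718.856, 0., 607.193],
                 [0., 718.856, 185.216],
                 [0., 0., 1.]])
K_01 = K_00.copy()

def essential_from_fundamental(F, K_00, K_01):
    # HZ (9.12): E = K'^T F K, with K' the right camera and K the left one
    return K_01.T @ F @ K_00

def mean_epipolar_residual(E, pts_left, pts_right, K_00, K_01):
    """Mean |x'^T E x| over all matches; points in pixels, normalized with inv(K)."""
    K0i, K1i = np.linalg.inv(K_00), np.linalg.inv(K_01)
    total = 0.0
    for (u1, v1), (u2, v2) in zip(pts_left, pts_right):
        x1n = K0i @ np.array([u1, v1, 1.0])   # normalized left point
        x2n = K1i @ np.array([u2, v2, 1.0])   # normalized right point
        total += abs(x2n @ E @ x1n)
    return total / len(pts_left)
```

One detail worth flagging: the residual of a single match is a scalar, so `cv::sum(residual)[0]` is fine, but taking the absolute value before accumulating (as above) avoids positive and negative residuals cancelling out, which can make a bad E look deceptively good.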
On every execution of the algorithm, the value of the Fundamental Matrix changes **slightly**, as expected (and the mean error, as mentioned above, is always around 0.01), while the Essential Matrix... changes a lot!
I tried to decompose the matrix using the OpenCV SVD implementation (I understand it's not the best; for that reason I'll probably switch to LAPACK, any suggestions?), and here again the constraint that the two non-zero singular values must be equal is not respected, and this drives my whole algorithm to a completely wrong estimation of the rectification.
I would also like to test this algorithm with images produced by my own cameras (I have two Allied Vision cameras), but I'm waiting for a high-quality chessboard, so the KITTI dataset is my starting point.
**EDIT** One previous error was in the formula: I computed the residual of E as `x^T * E * x' = 0` instead of `x'^T * E * x = 0`. This is now fixed and the residual error of E seems good, but the Essential matrix I get is very different every time... and after the SVD, the two singular values are not as similar as they should be.
**EDIT** These are the differing SVD singular values.
cv::SVD produces this result:
>133.70399
>127.47910
>0.00000
while Eigen's SVD produces the following:
>1.00777
>0.00778
>0.00000
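One thing to keep in mind when comparing those numbers: F (and therefore E = K'^T F K) is only defined up to a global scale, so the absolute singular values will legitimately differ between runs and between libraries. Only the ratio of the two non-zero singular values is meaningful: for a true essential matrix they are equal and the third is zero. A small NumPy sanity check on a ground-truth E built from a known (made-up) pose:

```python
import numpy as np

def skew(v):
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

# Ground-truth essential matrix E = [t]_x R from a known, made-up pose
theta = 0.1
R = np.array([[np.cos(theta), 0., np.sin(theta)],
              [0., 1., 0.],
              [-np.sin(theta), 0., np.cos(theta)]])
t = np.array([1., 0., 0.2])
E = skew(t) @ R

s = np.linalg.svd(E, compute_uv=False)
print(s)            # two equal values (= ||t||) and one ~0: the defining property of E
print(s[0] / s[1])  # this ratio is ~1 regardless of the overall scale of E
```

So rather than eyeballing raw singular values, checking `s[0]/s[1] ≈ 1` and `s[2] ≈ 0` (after normalizing by `s[0]`) is the scale-free test. The cv::SVD output above actually has a ratio close to 1, while the Eigen output does not, which suggests the two decompositions were not run on the same matrix.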
Okay, maybe this is not an OpenCV-related problem, but any help is more than welcome.

HYPEREGO, Mon, 04 Mar 2019 11:56:14 -0600
http://answers.opencv.org/question/209787/

**Camera projection matrix from fundamental**
http://answers.opencv.org/question/89418/camera-projection-matrix-from-fundamental/

I'm pretty new to OpenCV and I'm trying to puzzle together a monocular AR application **getting structure from motion**. I've got a tracker up and running which tracks points well, and the optical flow looks good. It needs to work with uncalibrated cameras.
From the point correspondences I get the fundamental matrix with findFundamentalMat, but I'm lost on how to get the camera projection matrices. Matrix math is not my strong suit, and for all my google-fu, all I can find are examples using pre-calibrated cameras.
1. Find fundamental matrix using findFundamentalMat (check!)
2. Find epilines with computeCorrespondEpilines (check!)
3. **Extract projection matrix P and P1** (????)
P is identity matrix for the uncalibrated case, but **how do I get P1**?
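For what it's worth, HZ gives a closed form for this (result 9.14): with P = [I | 0], a valid second camera is P1 = [[e']_x F | e'], where e' is the epipole in the second image, i.e. the left null vector of F (F^T e' = 0). A NumPy sketch (the rank-2 F below is fabricated just to exercise the function; `cameras_from_fundamental` is my own name):

```python
import numpy as np

def skew(v):
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def cameras_from_fundamental(F):
    """Canonical projective camera pair for F (HZ, result 9.14):
    P = [I | 0],  P1 = [[e']_x F | e'],  where F^T e' = 0."""
    U, S, Vt = np.linalg.svd(F)
    e2 = U[:, 2]                      # left null vector of F: the epipole in image 2
    P = np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P, P1

# fabricated rank-2 matrix standing in for a real F
F = skew(np.array([1., 2., 3.])) @ np.array([[1., 0., 2.],
                                             [0., 1., 1.],
                                             [3., 1., 0.]])
P, P1 = cameras_from_fundamental(F)
```

Two caveats: any P1' = [[e']_x F + e' v^T | lambda e'] is equally valid, and the pair is only defined up to a projective transformation, so the reconstruction you get from it is projective, not metric; upgrading it to something usable for AR requires at least the camera intrinsics.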
menneske, Sat, 05 Mar 2016 05:03:55 -0600
http://answers.opencv.org/question/89418/

**Using RANSAC mask output in cv::findFundamentalMat**
http://answers.opencv.org/question/21440/using-ransac-mask-output-in-cvfindfundamentalmat/

Hi,
I am using cv::findFundamentalMat to find the fundamental matrix between two images.
Since I need to store the inliers, I am planning to use the RANSAC inlier mask output of the function.
Hence I need to iterate over the mask.
How can I do that?
I have been stuck on this problem for one week, please help me.
    cv::Mat mask;
    cv::Mat fmat = cv::findFundamentalMat(cv::Mat(imgpts1), cv::Mat(imgpts2), CV_FM_7POINT, 3., .9, mask);
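Two observations, hedged since the full code isn't shown: first, as far as I know the mask is only filled by the robust estimators (CV_FM_RANSAC / CV_FM_LMEDS), not by CV_FM_7POINT, so the method flag may need changing. Second, the mask comes back as an N x 1 CV_8U matrix with one row per input point (1 = inlier), so in C++ you read it with `mask.at<uchar>(i, 0)` in a loop over the rows. The filtering idea, sketched in NumPy with made-up points:

```python
import numpy as np

# A hypothetical mask of the kind returned through the last argument of
# cv::findFundamentalMat: N x 1, uchar, 1 for inliers and 0 for outliers
mask = np.array([[1], [0], [1], [1], [0]], dtype=np.uint8)
imgpts1 = np.array([[10., 20.], [30., 5.], [7., 8.], [1., 2.], [50., 60.]])
imgpts2 = imgpts1 + 1.0  # placeholder second-image points

# keep only the rows the estimator flagged as inliers
inliers1 = imgpts1[mask.ravel() == 1]
inliers2 = imgpts2[mask.ravel() == 1]
print(len(inliers1))  # 3
```

In C++ the equivalent is a `for` loop over `mask.rows`, pushing `imgpts1[i]` / `imgpts2[i]` into new vectors whenever `mask.at<uchar>(i, 0)` is nonzero.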
This is the relevant line in my code.

hari, Thu, 26 Sep 2013 10:24:10 -0500
http://answers.opencv.org/question/21440/

**How to use five-point.cpp**
http://answers.opencv.org/question/17584/how-to-use-five-pointcpp/

Hi there,
I noticed that a nice little function has appeared in OpenCV: five-point.cpp.
However, I can't figure out how to use the various functions inside. Is there an example available?
If the functions are not mature yet, I would be glad to test them.
Best regards,
Guido

Guido, Fri, 26 Jul 2013 07:27:27 -0500
http://answers.opencv.org/question/17584/

**Question about the pose from homography and fundamental matrix**
http://answers.opencv.org/question/16658/question-about-the-pose-from-homography-and-fundamenta-matrix/

I have obtained a projection matrix from a homography and the camera parameters for AR. To check whether the result is good, I tested whether the ROI (0, 0), (w, 0), (w, h), (0, h) in the coordinate frame of the reference image could be visualized in a different image from a different view. I assumed that the 4 ROI points have zero depth. The result was successful.
These days, I have been implementing the same functionality starting from the fundamental matrix:
1. Get the Fundamental matrix
2. Get the Essential matrix from the Fundamental matrix
3. Decompose the Essential matrix into a rotation matrix and a translation vector
4. Get the projection matrix from the camera parameters and the rotation and translation
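On step 3, one detail that often causes exactly this symptom: the SVD of E yields four (R, t) candidates (HZ, result 9.19), and only one of them places the scene in front of both cameras; picking any of the other three produces nonsense projections. Also, t is only recovered up to scale, so the projection matrix from E can differ from the homography-based one by a similarity even when both are "correct". A NumPy sketch of the four-way decomposition (my own helper names, made-up test pose):

```python
import numpy as np

def skew(v):
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def decompose_essential(E):
    """The four (R, t) candidates encoded in an essential matrix (HZ, result 9.19).
    Only one candidate puts the triangulated points in front of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U                         # force proper rotations (det = +1)
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                        # translation direction, up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# made-up ground-truth pose, round-tripped through E = [t]_x R
th = 0.2
R = np.array([[np.cos(th), -np.sin(th), 0.],
              [np.sin(th), np.cos(th), 0.],
              [0., 0., 1.]])
t = np.array([1., 0.5, 0.2])
candidates = decompose_essential(skew(t) @ R)
# one of the four candidates matches (R, t/||t||); a cheirality test selects it
```

OpenCV bundles this decomposition, together with the cheirality test that selects the valid candidate, as cv::decomposeEssentialMat and cv::recoverPose.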
As in the homography case, I transformed the 4 ROI points of the reference image into the other image at a different view. But the result is different from the one given by the homography: an infeasible result was shown as a strange ROI box.
I was wondering if the projection matrix obtained from the homography is different from the one obtained from the fundamental matrix?
Please let me know how to transform the ROI in the reference image into the same scene at a different view.

sniper, Sat, 13 Jul 2013 21:32:56 -0500
http://answers.opencv.org/question/16658/