OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation, 2012-2018.
Sun, 14 Jun 2020 03:28:30 -0500

Residual error from fundamental matrix
http://answers.opencv.org/question/227057/residual-error-from-fundamental-matrix/

Hi guys,
as in my previous topics, I'm still working on self-calibration. I'm generating the data for the evaluation, but I end up with a strange value when computing the residual error as defined [here, slide 31](http://campar.in.tum.de/twiki/pub/Chair/TeachingWs10Cv2/3D_CV2_WS_2010_TwoView-Fmatrix.pdf). Maybe I'm using the wrong function to compute the norm. The resulting residual error is absurd, while the epipolar constraint x'Fx=0 gives me a residual of 0.25, so I suppose the matrix is almost correct.
I have point correspondences for images L and R, and the fundamental matrix. Currently I'm doing it this way; I don't care that it isn't efficient, since it's just for data generation:
for (int i = 0; i < maskInliers.rows; i++)
{
    if (maskInliers.at<uchar>(i) == 1)
    {
        inliersL.push_back(featuresL.at(i));
        inliersR.push_back(featuresR.at(i));
        cv::Mat temp_point_1 = cv::Mat(3, 1, CV_64F);
        temp_point_1.at<double>(0, 0) = featuresL.at(i).x;
        temp_point_1.at<double>(1, 0) = featuresL.at(i).y;
        temp_point_1.at<double>(2, 0) = 1;
        cv::Mat temp_point_2 = cv::Mat(3, 1, CV_64F);
        temp_point_2.at<double>(0, 0) = featuresR.at(i).x;
        temp_point_2.at<double>(1, 0) = featuresR.at(i).y;
        temp_point_2.at<double>(2, 0) = 1;
        /*******************************
         * COMPUTING THE F RESIDUALS
         *******************************/
        // Epipolar constraint x'Fx = 0
        cv::Mat tempResF = temp_point_2.t() * fundamentalMat * temp_point_1;
        residualF += fabs(tempResF.at<double>(0, 0));
        // Residual error
        double resError = cv::norm(temp_point_2 - (fundamentalMat * temp_point_1)) +
                          cv::norm(temp_point_1 - (fundamentalMat.t() * temp_point_2));
        residualF_error += resError;
    }
}
I would like to compute the residual error; is there any built-in function to do that? I've looked in the documentation but haven't found one.
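One common definition of the residual error is the symmetric epipolar distance: the geometric distance from each point to the epipolar line induced by its correspondence (in both directions), rather than a norm of x' - Fx. A sketch (not an OpenCV built-in; the helper name is illustrative):

```python
import numpy as np

def symmetric_epipolar_distance(F, pts1, pts2):
    """pts1, pts2: (N, 2) arrays of corresponding pixel coordinates."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([np.asarray(pts1, float), ones])  # homogeneous (N, 3)
    x2 = np.hstack([np.asarray(pts2, float), ones])
    l2 = x1 @ F.T                           # epipolar lines in image 2 (F x1)
    l1 = x2 @ F                             # epipolar lines in image 1 (F^T x2)
    num = np.abs(np.sum(x2 * l2, axis=1))   # |x2^T F x1| per correspondence
    d2 = num / np.hypot(l2[:, 0], l2[:, 1])  # point-to-line distance in image 2
    d1 = num / np.hypot(l1[:, 0], l1[:, 1])  # point-to-line distance in image 1
    return d1 + d2
```

Unlike the norm-based expression in the code above, this is a distance in pixels, so its scale is directly interpretable.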
EDIT: the results I'm getting are the following:
Residual of F 0.250138
Mean residual of F 0.0039084
F RESIDUAL ERROR: 65237.2
where:
- Residual of F is computed using the epipolar geometry x'Fx=0
- mean error is the previous value divided by the number of inliers used for estimating the fundamental matrix
- The last one (F RESIDUAL ERROR) is the one that is wrong and that I'm asking about.

HYPEREGO (Mon, 02 Mar 2020 11:53:06 -0600)
http://answers.opencv.org/question/227057/

Comparing F matrix and E Matrix
http://answers.opencv.org/question/231224/comparing-f-matrix-and-e-matrix/

I am doing the following:
cv::Mat E = cv::findEssentialMat(points1, points2, camera_matrix, cv::RANSAC, 0.99899999, 5);
cv::Mat F = cv::findFundamentalMat(points1, points2, cv::RANSAC, 5);
cv::Mat F_from_E = camera_matrix * E * camera_matrix.t();
F_from_E /= F_from_E.at<double>(2,2);
Shouldn't F and F_from_E be identical (at least up to epsilon)? I am getting totally different results. What is wrong?
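One thing worth noting: F is only defined up to scale, so comparing raw entries is misleading even when the math is right. A sketch of a scale-invariant comparison (the helper name is illustrative):

```python
import numpy as np

def diff_up_to_scale(A, B):
    """Frobenius-norm difference between A and B after removing overall scale."""
    An = A / np.linalg.norm(A)
    Bn = B / np.linalg.norm(B)
    # the two normalized matrices may still differ by an overall sign
    return min(np.linalg.norm(An - Bn), np.linalg.norm(An + Bn))
```

Note also that the textbook relation (HZ 9.12) is E = K'^T F K, i.e. F = K'^{-T} E K^{-1}, so converting E to F requires the inverses of the camera matrix rather than K * E * K^T; even a scale-invariant comparison will not match otherwise.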
F =
[7.04979698183469e-06, 0.002432250773974527, -0.4123240414255413;
-0.002437356457829931, 4.279234351782832e-06, 0.3213949830418951;
0.4106037007903602, -0.3267400404863846, 1]
F_from_E =
[-133309825.1056604, -12617730813.88055, -11410224.97616318;
12698132835.10022, 119029809.7687021, 13590317.66456599;
[11626592.7208607, -13823099.15009778, 1]

Humam Helfawi (Sun, 14 Jun 2020 03:28:30 -0500)
http://answers.opencv.org/question/231224/

Composing multiple Fundamental/Essential matrices
http://answers.opencv.org/question/230857/composing-multiple-fundamentalessential-matrices/

I have computed the fundamental matrices between `Frame [a,b], [b,c] and [c,d]`. I now have `Fab`, `Fbc` and `Fcd`. Is it possible to compute Fad directly, without matching? I am thinking of something like 3D transformation composing, where:
T04 = T01 * T12 * T23 * T34
I think working with fundamental matrices directly won't help. I guess the answer lies in essential matrices, which I can compute since I have the camera calibration matrix.
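The composition idea can be sketched under the assumption that each essential matrix has already been decomposed into a rotation R and a (unit-norm, scale-ambiguous) translation t; compose the poses, then rebuild E from the composed pose. All names here are illustrative:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def compose(R1, t1, R2, t2):
    """Pose of c w.r.t. a, given b w.r.t. a (R1, t1) and c w.r.t. b (R2, t2),
    using the convention x_b = R1 x_a + t1."""
    R = R2 @ R1
    t = R2 @ t1 + t2
    return R, t

def essential_from_pose(R, t):
    return skew(t) @ R
```

The caveat: each pairwise t recovered from an essential matrix is only known up to scale, so the composed E_ad is not recoverable this way unless the relative scales between pairs are resolved first (e.g. via triangulated points shared across pairs).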
So, how can I combine two (or more) fundamental or essential matrices?

Humam Helfawi (Thu, 04 Jun 2020 06:54:00 -0500)
http://answers.opencv.org/question/230857/

Wrong rank in Fundamental Matrix
http://answers.opencv.org/question/204100/wrong-rank-in-fundamental-matrix/

Hi guys,
I'm using OpenCV for Python 3 and, based on the Mastering OpenCV book, I'm trying to compute the epipoles from many images (a structure-from-motion algorithm).
Many books say that the fundamental matrix has rank 2, but the OpenCV function returns a rank-3 matrix.
How can I make this right?
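A common fix, sketched here with numpy (this is not an OpenCV built-in), is to project the estimated matrix onto the rank-2 set by zeroing its smallest singular value:

```python
import numpy as np

def enforce_rank2(F):
    """Closest rank-2 matrix to F in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0  # drop the smallest singular value
    return U @ np.diag(s) @ Vt
```

Note too that a numerical rank test needs a tolerance: the third singular value of an estimated F is often tiny but nonzero, and a strict rank computation will then report 3 even when the matrix is effectively rank 2.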
import cv2
import numpy as np

orb = cv2.ORB_create()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
# create BFMatcher object (ORB descriptors are binary, so use Hamming distance)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1, des2)
# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)
pts1 = []
pts2 = []
for m in matches:
    pts2.append(kp2[m.trainIdx].pt)
    pts1.append(kp1[m.queryIdx].pt)
pts1 = np.float32(pts1)
pts2 = np.float32(pts2)
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
# keep only the inlier points
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]
# F is the Fundamental Matrix
From that code, the output looks like:
Processing image 0 and image 1
rank of F: 3
Processing image 0 and image 2
rank of F: 3
Processing image 0 and image 3
rank of F: 3
Processing image 0 and image 4
rank of F: 2
[...]
Could someone help me? Does anyone have working code for SfM using OpenCV?
Thanks in advance.
Lucas Amparo Barbosa (Mon, 26 Nov 2018 11:04:12 -0600)
http://answers.opencv.org/question/204100/

Questions about the fundamental matrix and homographies
http://answers.opencv.org/question/189206/questions-about-the-fundamental-matrix-and-homographies/

Hi there!
Recently I learned about the fundamental matrix, and I have a question I could not settle by googling. Maybe you can help me: from what I have read, the fundamental matrix is a more general case of the homography, as it is independent of the scene's structure. So I was wondering if it could be used for image stitching as well. But all the papers I found only use homographies. So I reread the material on the properties of the fundamental matrix, and now I am wondering:
Is it impossible to use the fundamental matrix for stitching because of its rank deficiency and the fact that it only relates points in image 1 to lines in image 2?
Another question I have regarding homographies: all the papers I read about image stitching use homographies for rotational panoramas. What if I want to create a panorama based only on translation between images? Can I use a homography as well? The answers to that question vary quite a lot.
Kind regards and thanks for your help!
Conundraah
Conundraah (Thu, 12 Apr 2018 01:54:06 -0500)
http://answers.opencv.org/question/189206/

Wrong Epipolar lines, No Visual sanity
http://answers.opencv.org/question/181730/wrong-epipolar-lines-no-visual-sanity/

Hi,
I've tried using the code given at https://docs.opencv.org/3.2.0/da/de9/tutorial_py_epipolar_geometry.html to find the epipolar lines, but instead of getting the output shown at that link, I am getting the following output.
![image description](/upfiles/15151486894383214.png)
but when changing the line `F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS)` to
`F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_RANSAC)` i.e: using `RANSAC` algorithm to find Fundamental matrix instead of `LMEDS` this is the following output.
![image description](/upfiles/15151489062631591.png)
When the same line is replaced with `F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_8POINT)` i.e: use eight point algorithm this is the following output.
![image description](/upfiles/15151505056498559.png)
None of the above outputs has any visual sanity, and none is anywhere close to the output given in the OpenCV documentation for finding epipolar lines. But ironically, if the same code is executed changing the fundamental matrix algorithm in this particular sequence:
1. FM_LMEDS
2. FM_8POINT
3. FM_7POINT
4. FM_LMEDS
most accurate results are generated. This is the output.
![image description](/upfiles/1515152387699454.png)
I thought we were supposed to get the above output in one run of any of the algorithms (with some variation in matrix values and error). Am I running the code incorrectly? What do I have to do to get the correct (i.e., visually sane) epipolar lines? I am using OpenCV version 3.3.0 and Python 2.7.
Looking forward to your reply.
Thank you.

salmankhhcu (Fri, 05 Jan 2018 06:00:03 -0600)
http://answers.opencv.org/question/181730/

Inverse normalization in 8-point algorithm for fundamental matrix
http://answers.opencv.org/question/178961/inverse-normalization-in-8-point-algorithm-for-fundamental-matrix/

The 8-point algorithm for the fundamental matrix normalizes the pixel points before solving the linear system of equations, and the solution is inverse-normalized to get the fundamental matrix.
I am referring to the source at
https://github.com/opencv/opencv/blob/master/modules/calib3d/src/fundam.cpp
lines 615 to 620:
// apply the transformation that is inverse
// to what we used to normalize the point coordinates
Matx33d T1( scale1, 0, -scale1*m1c.x, 0, scale1, -scale1*m1c.y, 0, 0, 1 );
Matx33d T2( scale2, 0, -scale2*m2c.x, 0, scale2, -scale2*m2c.y, 0, 0, 1 );
F0 = T2.t()*F0*T1;
This appears to be the same as the normalization procedure. I am not able to understand how this is supposed to be the inverse of the normalization. Any help is appreciated.
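The step is not the inverse transform applied to points; it falls out of substituting the normalized points into the epipolar constraint. If x̂1 = T1·x1 and x̂2 = T2·x2, and F0 satisfies x̂2ᵀ F0 x̂1 = 0, then x2ᵀ (T2ᵀ F0 T1) x1 = 0, so F = T2ᵀ F0 T1 is the matrix valid for the original pixel coordinates. A numpy sketch with illustrative values:

```python
import numpy as np

# normalization transforms (scale + translate), in the same form as T1/T2
# in the OpenCV source; the numbers here are illustrative
T1 = np.array([[0.01, 0, -1.2], [0, 0.01, -0.8], [0, 0, 1]])
T2 = np.array([[0.02, 0, -2.0], [0, 0.02, -1.5], [0, 0, 1]])

x1 = np.array([150.0, 90.0, 1.0])  # original pixel coordinates (homogeneous)
x2 = np.array([160.0, 95.0, 1.0])

F0 = np.arange(1.0, 10.0).reshape(3, 3)  # stand-in for F on normalized points

lhs = (T2 @ x2) @ F0 @ (T1 @ x1)    # constraint evaluated on normalized points
rhs = x2 @ (T2.T @ F0 @ T1) @ x1    # same value with the denormalized F
```

The two expressions are algebraically identical for any points and any F0, which is why T2.t()*F0*T1 undoes the normalization even though it looks like the forward transform.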
thens (Wed, 22 Nov 2017 22:00:34 -0600)
http://answers.opencv.org/question/178961/

get rotation from fundamental matrix
http://answers.opencv.org/question/176270/get-rotation-from-fundamental-matrix/

I wonder if it is possible to get the relative rotation between two uncalibrated cameras, based on an image pair with feature points matched between the two cameras?
I read some articles, and it sounds like it is possible to get the relative rotation between the two cameras from the fundamental matrix, but after searching around I have only found solutions using the essential matrix, which requires the cameras to be calibrated...
shelpermisc (Fri, 13 Oct 2017 08:54:09 -0500)
http://answers.opencv.org/question/176270/

Verification of fundamental matrix
http://answers.opencv.org/question/103829/verification-of-fundamental-matrix/

I have evaluated the fundamental matrix using the following data:
A = [[19, 53], [127, 145], [81, 208], [43, 173], [89, 37], [159, 64], [225, 136], [132, 192], [139, 79]]
B = [[35, 40], [127, 104], [72, 222], [38, 181], [94, 46], [155, 70], [223, 123], [132, 207], [135, 74]]
I have converted them like this:
A = np.array(A, dtype=np.float32)
B = np.array(B, dtype=np.float32)
The matrix then was computed using:
F, M = cv2.findFundamentalMat(A, B, cv2.FM_8POINT)
The matrix I get is:
[[ 1.06647202e-06 2.55389070e-05 -3.32304416e-02]
[ 3.11646650e-05 -2.15515828e-06 -2.41711208e-03]
[ 2.48812445e-02 -3.98441929e-03 1.00000000e+00]]
But I believe that I may have done something wrong. Is the form of the two arrays correct? Can someone verify the result? If not, where is the mistake in my code?
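One way to sanity-check a result like this (a sketch, not a definitive verification; the helper is hypothetical) is to evaluate the epipolar constraint bᵀFa for every correspondence; a reasonable F should make these residuals small relative to the coordinate scale:

```python
import numpy as np

def epipolar_residuals(F, A, B):
    """|b^T F a| for each correspondence (a in image 1, b in image 2)."""
    A_h = np.hstack([np.asarray(A, float), np.ones((len(A), 1))])  # homogeneous
    B_h = np.hstack([np.asarray(B, float), np.ones((len(B), 1))])
    return np.abs(np.sum(B_h * (A_h @ F.T), axis=1))
```

With the posted A, B, and F, large residuals would indicate a problem in either the input layout or the estimation.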
Ad_Am (Thu, 06 Oct 2016 22:04:50 -0500)
http://answers.opencv.org/question/103829/

OpenCV: Essential Matrix SVD decomp
http://answers.opencv.org/question/64534/opencv-essential-matrix-svd-decomp/

Hi Folks,
I am trying to get the camera motion vector based on OpenCV optical flow. I use a C# wrapper for Unity of OpenCV 2.4.10, but it is just a wrapper. Here is the test case:
1. Calibrated my camera and have camera matrix K (3x3)
2. Use two 100%-identical images framePrev and frameThis as optical-flow frames (meaning no motion)
3. Selected features (2d points) from both images via
<pre><code>goodFeaturesToTrack (frameThis, pointsThis, iGFFTMax, 0.05, 20);
goodFeaturesToTrack (framePrev, pointsPrev, iGFFTMax, 0.05, 20);</code></pre> so I have features pointsPrev and pointsThis.
4. Use <pre><code>calcOpticalFlowPyrLK (framePrev, frameThis, pointsPrev, pointsThis, status, err);</code></pre> to verify the flow for the points; then, analyzing the status and err arrays, I make sure my pointsPrev and pointsThis are identical pairs of points in image pixel coordinates.
5. Select the first 8 pairs from pointsPrev and pointsThis (simply truncate the arrays), then get the fundamental matrix: <pre><code>F = Calib3d.findFundamentalMat(pointsPrev, pointsThis, Calib3d.FM_8POINT, 2, 0.99);</code></pre> When the points in all the pairs are identical (no motion), it gives me a 3x3 matrix with all zeros; I suggest that is correct (or?)
6. Then I get the essential matrix based on <code>E = K'.t() * F * K</code> according to HZ 9.12. I have one camera, so K' = K.
<pre><code>gemm (K.t (),F,1,null,0,tmpMat,Core.GEMM_3_T);
gemm (tmpMat,K,1,null,0,E,Core.GEMM_3_T);</code></pre>
when F = |0|, then E = |0| as well.
7. Finally I apply SVD decomposition on E:
<pre><code>SVDecomp(E, W, U, VT);</code></pre>
8. Analyzing the W, U, VT output matrices, I observe these values:
<pre><code>
W: 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000
U: -0.577, -0.408, -0.707, -0.577, -0.408, 0.707, 0.577, -0.816, 0.000
Vt: 1.000, 0.000, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 1.000
</code></pre>
I suggest these values are strange: according to the books/manuals, the camera translation vector U.col(2) is Vector3(-0.7071, 0.7071, 0), which is not correct.
Other observations, when I test on different image frames:
<ul>
<li>U values are always between -1 and 1, which should not be a translation; they look more like sine/cosine values (again, 0.7071 is the sine or cosine of pi/4)</li>
<li>Fundamental matrix outputs are radically different for different algorithms - 8POINT, 7POINT, RANSAC, LMEDS - even for pairs of corresponding points (features)</li>
<li>using a different number of pairs of points (features) - say 5, 7, 8, 15, 40 - for the same algorithm also radically changes the fundamental matrix output</li>
</ul>
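One property worth checking (not from the original post): a valid essential matrix has singular values of the form (s, s, 0), so a W of all zeros, as above, immediately signals a degenerate input rather than a usable decomposition. A hypothetical numpy helper:

```python
import numpy as np

def looks_like_essential(E, tol=1e-6):
    """True if E's singular values have the (s, s, 0) pattern of an essential matrix."""
    s = np.linalg.svd(E, compute_uv=False)
    # two equal singular values and one (near) zero
    return s[0] > tol and abs(s[0] - s[1]) < tol * s[0] and s[2] < tol * s[0]
```

This check fails for both the all-zero matrix (identical images give no parallax, so no essential matrix) and for matrices with three distinct singular values.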
I do really need your help, thank you in advance!
That is a copy of my question on StackOverflow:
http://stackoverflow.com/questions/30953989/opencv-essential-matrix-svd-decomp
Kind Regards, Eugene
<b>EDIT 1:</b> Additional observations
Then I tried to find the fundamental matrix for these frame points:
<pre><code>
MatOfPoint2f p1 = new MatOfPoint2f(new Point(100,100),new Point(100,200),new Point(100,300),
new Point (200,100),new Point(200,200),new Point(200,300),
new Point(300,100),new Point(300,200),new Point(300,300));
MatOfPoint2f p2 = new MatOfPoint2f(new Point(80,80),new Point(80,200),new Point(80,320),
new Point (200,80),new Point(200,200),new Point(200,320),
new Point(320,80),new Point(320,200),new Point(320,320));
</code></pre>
The points correspond to the case when the camera moves in the forward direction; all the features are center-symmetric.
When I use findFundamentalMat with 8POINT algorithm - The Fund matrix is
<pre><code>
F =  0.00000000,  0.00010236, -0.02047281,
    -0.00010236,  0.00000000,  0.02047281,
     0.02047281, -0.02047281,  0.00000000
</code></pre>
But when I use RANSAC - the result is
<pre><code>
F = 0.00000000, 0.00000000, 0.00000000,
    0.00000000, 0.00000000, 0.00000000,
    0.00000000, 0.00000000, 0.00000000
</code></pre>
Eugene Bartosh (Sat, 20 Jun 2015 14:48:56 -0500)
http://answers.opencv.org/question/64534/

Fundamental Matrix Accuracy
http://answers.opencv.org/question/57267/fundamental-matrix-accuracy/

I am working on 3D reconstruction of a scene. I matched the features of 2 images, so I have keypoints1 and keypoints2, as well as the fundamental matrix F and essential matrix E. Now I have to check X'FX = 0. I know X' is a coordinate in the second image and X is a coordinate in the first image. I have a few questions; can anyone please help me?
Are X' and X the keypoints1 and keypoints2?
Are they the (x, y) coordinates of those keypoints?
The product X'FX will be in Mat format, if I'm not wrong. How can this be equal to 0?
Please can anyone help, sorry if my question is silly.
Thanks in advance.

SUHAS (Wed, 11 Mar 2015 06:06:43 -0500)
http://answers.opencv.org/question/57267/

Reprojection error with findFundamentalMat
http://answers.opencv.org/question/53955/reprojection-error-with-findfundamentalmat/

Hello,
Maybe my question is not really appropriate here.
As the function findFundamentalMat (with a RANSAC method) does not return the list of outliers, I was trying to use the fundamental matrix returned by the function to compute the reprojection error for the input points myself.
I looked into the [source code](https://github.com/Itseez/opencv/blob/master/modules/calib3d/src/ptsetreg.cpp) and discovered a function called findInliers, which calls computeError to compute the error for the input points using the model estimated in the current iteration.
When I checked this function:
const Point3f* from = m1.ptr<Point3f>();
const Point3f* to = m2.ptr<Point3f>();
const double* F = model.ptr<double>();
for (int i = 0; i < count; i++)
{
    const Point3f& f = from[i];
    const Point3f& t = to[i];
    double a = F[0]*f.x + F[1]*f.y + F[ 2]*f.z + F[ 3] - t.x;
    double b = F[4]*f.x + F[5]*f.y + F[ 6]*f.z + F[ 7] - t.y;
    double c = F[8]*f.x + F[9]*f.y + F[10]*f.z + F[11] - t.z;
    errptr[i] = (float)std::sqrt(a*a + b*b + c*c);
}
For me, model is the current estimated fundamental matrix, which seems to have 12 elements instead of being a 3x3 matrix.
Is there a problem, or am I missing something? Can someone explain the formula for computing the error?
To compute the error, I think I will use the distance between the real 2D location in the image and the corresponding epipolar line (obtained from the fundamental matrix) for each input point, but I want to understand what is behind the formula used in the OpenCV source code.
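The point-to-epipolar-line idea can also be written as the Sampson distance, a standard first-order approximation of the geometric error for F (a sketch; the helper name is illustrative, and this is not the OpenCV internal):

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order geometric error for one correspondence.
    x1, x2: homogeneous 3-vectors (u, v, 1)."""
    Fx1 = F @ x1        # epipolar line of x1 in image 2
    Ftx2 = F.T @ x2     # epipolar line of x2 in image 1
    num = (x2 @ F @ x1) ** 2
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return num / den
```

It is zero exactly when the epipolar constraint holds, and grows with the squared pixel distance from the epipolar lines.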
Thanks.
Edit:
I was wrong: [findFundamentalMat](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmat) also returns the list of inliers in a cv::Mat with the C++ interface.

Eduardo (Wed, 28 Jan 2015 07:44:52 -0600)
http://answers.opencv.org/question/53955/

findFundamentalMat not correctly filtering outliers
http://answers.opencv.org/question/54824/findfundamentalmat-not-correctly-filtering-outliers/

After detecting keypoints and matching them between two images, I run findFundamentalMat to estimate the fundamental matrix and also filter the outliers. When I draw the matches using the mask I get from findFundamentalMat, there are sometimes some matches that are not filtered out even though they clearly don't fit the transform.
Here is an example of a good filtering (left image from the robot's camera, right image static):
![image description](/upfiles/14235395351249148.png)
But without moving the robot, the matches change a lot from one picture to the next (due to flickering in the light?),
and often one or two wrong matches are left. I suspect those matches cause the inconsistency in my estimated fundamental matrix, which can look totally different from one image to the next, even without moving the robot.
![image description](/upfiles/14235396878588863.png)
Here the yellow and blue lines clearly don't fit the model. Could they cause the fundamental matrix to go totally wrong?

Mehdi (Mon, 09 Feb 2015 21:43:12 -0600)
http://answers.opencv.org/question/54824/

relations of fundamental matrices, projection matrices & reprojections from multiple views
http://answers.opencv.org/question/53157/relations-of-fundamental-matrices-projection-matrices-reprojections-from-multiple-views/

I have a (most probably) very basic question in relation to fundamental matrices, projection matrices & reprojections.
I'm trying to determine the 3D coordinates of some points based on a series of images (actually, on 2D coordinates already identified in subsequent images). It seems that when running the same algorithm on subsequent images tracking the same points, I get very different 3D coordinates. Most probably some very basic step is missing from my approach. I'd appreciate any pointers to highlight my mistake :)
I have the following:
K - the intrinsic matrix for the camera; all images were taken with the same camera.
N images, taken from slightly different positions/orientations of the same points (it's a sequence of images made by moving the camera around the points).
Approximately 40 points are being tracked and are identified in all images (though not all images see all the points). Thus for each image I have a set of (i, xi, yi) triplets, where 'i' is the identifier of a point (0..39) and xi, yi are the 2D coordinates of that point in the particular image.
Starting with the above, for each image n = 0..N-1, I do the following:
1 - take image img(n) and img(n+1). Initially each image has a projection matrix P(n) = [I|0]
2 - check if they have at least 8 common points
3 - calculate the fundamental & projection matrices for img(n+1)
3.1 calculate the fundamental matrix F using OpenCV's findFundamentalMat() function
3.2 calculate the essential matrix by K.t() * F * K
3.3 decompose the essential matrix using SVD into R and t
3.4 calculate the projection matrix P(n+1) = R|t for img(n+1)
4 - calculate the 3D coordinates
4.1 triangulate the common points using a linear LS triangulation based on P(n) and P(n+1) for all matching points
4.2 reproject the points for P(n+1) through K * P(n+1) * X(i) (where X(i) is the triangulated 3D point for point i)
4.3 check the reprojection error
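Step 4.1 can be sketched with the standard DLT construction: each view contributes two rows derived from x × (P X) = 0, and the point is the null vector of the stacked system (shown here for two views; names are illustrative):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """x1, x2: (u, v) pixel coordinates in views with 3x4 projections P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector = homogeneous 3D point
    return X[:3] / X[3]   # dehomogenize
```

One caveat relevant to the question: the translation recovered from an essential matrix is only defined up to scale, so triangulations from different image pairs generally live at different, arbitrary scales unless all poses are chained into one common frame.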
For each image pair with at least 8 corresponding points, I'm getting fairly good results in terms of low reprojection error. But the 3D points calculated for each pair are widely different; for example, these are some of the triangulated 3D point results for the same tracked point in various images:
[-535.266, 251.398, -1142.35]
[0.862544, -0.39743, 1.84496]
[5.55258, -2.59372, 12.7258]
[20.9094, -7.89917, 56.7389]
[-0.242497, 0.113039, -0.515921]
[18.0375, -8.38645, 38.6765]
My expectation was that they would be close to each other, with only measurement and algorithm inaccuracies bringing in some small error, especially as the reprojection errors are quite low (values like 0.03, 0.5 or 3).
The issue might be that the subsequent projection matrices are not 'aligned' in some way, or that their scale, etc. differs, even though every subsequent image (n+1) triangulation is based on the previous image's calculated P(n) projection matrix; thus my assumption was that the triangulation would take place in the same 'space'. Still, the cameras' original orientations are very close to each other, so even in this case I was expecting a moderate error, as opposed to the widely different results I'm getting.
I've experimented with finding the 'best matching' image instead of using directly subsequent images, but the results are not getting better in that case either :(
Here is the C++/OpenCV based source code that I'm trying to get to work: http://pastebin.com/UE6YW39J , here is a sample input data set for the code: http://pastebin.com/gX3iJgYh , and here is the camera matrix used: http://pastebin.com/aBMBC1bd
akosmaroy (Fri, 16 Jan 2015 07:52:36 -0600)
http://answers.opencv.org/question/53157/

Decomposition of essential matrix leads to wrong rotation and translation
http://answers.opencv.org/question/30824/decomposition-of-essential-matrix-leads-to-wrong-rotation-and-translation/

Hi,
I am doing some SfM and having trouble getting R and T from the essential matrix.
Here is what I am doing in source code:
Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);
Mat E = new Mat();
Core.multiply(cameraMatrix.t(), fundamental, E); // cameraMatrix.t()*fundamental*cameraMatrix;
Core.multiply(E, cameraMatrix, E);
Mat R = new Mat();
Mat.zeros(3, 3, CvType.CV_64FC1).copyTo(R);
Mat T = new Mat();
calculateRT(E, R, T);
private void calculateRT(Mat E, Mat R, Mat T) {
    /*
     * //-- Step 6: calculate Rotation Matrix and Translation Vector
     * Matx34d P;
     * //decompose E
     * SVD svd(E, SVD::MODIFY_A);
     * Mat svd_u = svd.u;
     * Mat svd_vt = svd.vt;
     * Mat svd_w = svd.w;
     * Matx33d W(0,-1,0, 1,0,0, 0,0,1); // HZ 9.13
     * Mat_<double> R = svd_u * Mat(W) * svd_vt;
     * Mat_<double> T = svd_u.col(2); // u3
     * if (!CheckCoherentRotation(R)) {
     *     std::cout << "resulting rotation is not coherent\n";
     *     return 0;
     * }
     */
    Mat w = new Mat();
    Mat u = new Mat();
    Mat vt = new Mat();
    Core.SVDecomp(E, w, u, vt, Core.DECOMP_SVD); // Maybe use flags
    double[] W_Values = {0, -1, 0, 1, 0, 0, 0, 0, 1};
    Mat W = new Mat(new Size(3, 3), CvType.CV_64FC1, new Scalar(W_Values));
    Core.multiply(u, W, R);
    Core.multiply(R, vt, R);
    T = u.col(2);
}
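For reference, the recipe in the commented C++ block (HZ 9.13) can be sketched in numpy. Note that it relies on true matrix products, and that the decomposition yields four (R, t) candidates, from which the one placing points in front of both cameras must be chosen (the cheirality test, not shown). Names are illustrative:

```python
import numpy as np

def decompose_essential(E):
    """Return the two rotation candidates and the translation direction (HZ 9.13)."""
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (det = +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]  # u3: translation direction, up to sign and scale
    return R1, R2, t
```

Both returned candidates are proper rotations, and t is a unit vector; the overall scale of the translation is not recoverable from E alone.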
And here are the results of all matrices during and after the calculation:
Number matches: 10299
Number of good matches: 590
Number of obj_points left: 590.0
Fundamental:
[4.209958176688844e-08, -8.477216249742946e-08, 9.132798068178793e-05;
3.165719895008366e-07, 6.437858397735847e-07, -0.0006976204595236443;
0.0004532506630569588, -0.0009224427024602799, 1]
Essential:
[0.05410018455525099, 0, 0;
0, 0.8272987826496967, 0;
0, 0, 1]
U:
[0, 0, 1;
0, 0.9999999999999999, 0;
1, 0, 0]
W:
[1; 0.8272987826496967; 0.05410018455525099]
vt:
[0, 0, 1;
0, 1, 0;
1, 0, 0]
R:
[0, 0, 0;
0, 0, 0;
0, 0, 0]
T:
[1; 0; 0]
And for completeness, here are the images I am using:
left: https://drive.google.com/file/d/0Bx9OKnxaua8kXzRFNFRtMlRHSzg/edit?usp=sharing
right: https://drive.google.com/file/d/0Bx9OKnxaua8kd3hyMjN1Zll6ZkE/edit?usp=sharing
Can someone point out where something is going wrong, or what I am doing wrong?
glethien (Sat, 29 Mar 2014 06:52:14 -0500)
http://answers.opencv.org/question/30824/