OpenCV Q&A Forum (RSS feed: http://answers.opencv.org/questions/)
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018.

Fundamental Matrix Accuracy
http://answers.opencv.org/question/57267/fundamental-matrix-accuracy/

I am working on 3D reconstruction of a scene. I matched the features of two images, so I have keypoints1 and keypoints2, as well as the fundamental matrix F and the essential matrix E. Now I have to check that X'FX = 0. I know X' is a coordinate in the second image and X is the corresponding coordinate in the first image. I have a few questions; can anyone please help me?
Are X' and X the points from keypoints2 and keypoints1, respectively?
Are they the (x, y) coordinates of those keypoints?
If I am not wrong, the product X'FX will be in Mat format. How can it be equal to 0?
Sorry if my question is silly; any help is appreciated.
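A minimal plain-Java sketch of this check (no OpenCV; the fundamental matrix and the matched points below are hypothetical values), assuming X and X' are homogeneous pixel coordinates (x, y, 1), so that X'FX reduces to a single scalar:

```java
public class EpipolarCheck {
    // x'^T * F * x for homogeneous pixel coordinates x = (u, v, 1).
    // F is a 3x3 fundamental matrix; the result is a single scalar that is
    // only approximately zero for real, noisy matches.
    static double epipolarResidual(double[][] F, double[] x1, double[] x2) {
        // l = F * x1 (the epipolar line of x1 in the second image)
        double[] l = new double[3];
        for (int i = 0; i < 3; i++)
            l[i] = F[i][0]*x1[0] + F[i][1]*x1[1] + F[i][2]*x1[2];
        // residual = x2 . l
        return x2[0]*l[0] + x2[1]*l[1] + x2[2]*l[2];
    }

    public static void main(String[] args) {
        // Hypothetical F and one matched keypoint pair (u, v, 1).
        double[][] F = {{0, -1e-6, 1e-3}, {1e-6, 0, -2e-3}, {-1e-3, 2e-3, 1}};
        double[] x1 = {100, 200, 1};
        double[] x2 = {105, 198, 1};
        System.out.println("residual = " + epipolarResidual(F, x1, x2));
    }
}
```

In OpenCV the same product of a 1x3, a 3x3, and a 3x1 Mat yields a 1x1 Mat, so "equal to 0" in practice means the single value inside it is close to zero.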
Thanks in advance.

SUHAS (Wed, 11 Mar 2015 06:06:43 -0500)
http://answers.opencv.org/question/57267/

Reprojection error with findFundamentalMat
http://answers.opencv.org/question/53955/reprojection-error-with-findfundamentalmat/

Hello,
Maybe my question is not really appropriate here.
Since findFundamentalMat (with a RANSAC method) does not return the list of outliers, I was trying to use the fundamental matrix it returns to compute the reprojection error for the input points myself.
I looked into the [source code](https://github.com/Itseez/opencv/blob/master/modules/calib3d/src/ptsetreg.cpp) and discovered a function called findInliers, which calls computeError to compute the error for the input points using the fundamental matrix estimated at the current iteration.
When I checked this function:
    const Point3f* from = m1.ptr<Point3f>();
    const Point3f* to   = m2.ptr<Point3f>();
    const double* F = model.ptr<double>();

    for( int i = 0; i < count; i++ )
    {
        const Point3f& f = from[i];
        const Point3f& t = to[i];

        double a = F[0]*f.x + F[1]*f.y + F[ 2]*f.z + F[ 3] - t.x;
        double b = F[4]*f.x + F[5]*f.y + F[ 6]*f.z + F[ 7] - t.y;
        double c = F[8]*f.x + F[9]*f.y + F[10]*f.z + F[11] - t.z;

        errptr[i] = (float)std::sqrt(a*a + b*b + c*c);
    }
To me, model is the currently estimated fundamental matrix, yet it seems to have 12 elements instead of being a 3x3 matrix.
Is there a problem, or am I missing something? Can someone explain the formula used to compute the error?
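For reference, the snippet above indexes the model as a 3x4 matrix (12 doubles, row-major) applied to each 3D point, then takes the Euclidean distance to the target point. A minimal plain-Java sketch of that exact formula (hypothetical values, no OpenCV):

```java
public class ModelResidual {
    // Same formula as the OpenCV snippet above: treat the 12-element model M
    // as a 3x4 matrix (row-major) and compute || M * (f.x, f.y, f.z, 1)^T - t ||.
    static double residual(double[] M, double[] f, double[] t) {
        double a = M[0]*f[0] + M[1]*f[1] + M[2]*f[2]  + M[3]  - t[0];
        double b = M[4]*f[0] + M[5]*f[1] + M[6]*f[2]  + M[7]  - t[1];
        double c = M[8]*f[0] + M[9]*f[1] + M[10]*f[2] + M[11] - t[2];
        return Math.sqrt(a*a + b*b + c*c);
    }

    public static void main(String[] args) {
        // Identity transform: the residual is just the distance between f and t.
        double[] M = {1,0,0,0, 0,1,0,0, 0,0,1,0};
        double[] f = {1, 2, 3};
        double[] t = {1, 2, 4};
        System.out.println(residual(M, f, t)); // prints 1.0
    }
}
```

So whatever model this callback is fed, the formula itself is the residual of a 3x4 transform on 3D points, not an epipolar error on a 3x3 fundamental matrix.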
To compute the error, I think I will use, for each input point, the distance between the observed 2D location in the image and the corresponding epipolar line obtained from the fundamental matrix; but first I want to understand the formula used in the OpenCV source code.
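That point-to-epipolar-line distance can be sketched in plain Java as follows (no OpenCV; the fundamental matrix and the point pair are hypothetical toy values):

```java
public class EpipolarDistance {
    // Distance from x2 = (u2, v2, 1) to the epipolar line l = F * x1, where
    // l = (a, b, c) describes the line a*u + b*v + c = 0 in the second image.
    static double distance(double[][] F, double[] x1, double[] x2) {
        double[] l = new double[3];
        for (int i = 0; i < 3; i++)
            l[i] = F[i][0]*x1[0] + F[i][1]*x1[1] + F[i][2]*x1[2];
        double num = Math.abs(x2[0]*l[0] + x2[1]*l[1] + x2[2]*l[2]);
        return num / Math.sqrt(l[0]*l[0] + l[1]*l[1]);
    }

    public static void main(String[] args) {
        // Toy F mapping x1 = (0, 0, 1) to the line v = 0 in the second image.
        double[][] F = {{0, 0, 0}, {0, 0, -1}, {0, 1, 0}};
        double[] x1 = {0, 0, 1};
        double[] x2 = {5, 3, 1};
        System.out.println(distance(F, x1, x2)); // prints 3.0
    }
}
```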
Thanks.
Edit:
I was wrong: with the C++ interface, [findFundamentalMat](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmat) also returns the list of inliers in a cv::Mat.

Eduardo (Wed, 28 Jan 2015 07:44:52 -0600)
http://answers.opencv.org/question/53955/

Decomposition of essential matrix leads to wrong rotation and translation
http://answers.opencv.org/question/30824/decomposition-of-essential-matrix-leads-to-wrong-rotation-and-translation/

Hi,
I am doing some SfM and having trouble getting R and T from the essential matrix.
Here is what I am doing in source code:
    Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);

    Mat E = new Mat();
    Core.multiply(cameraMatrix.t(), fundamental, E); // cameraMatrix.t()*fundamental*cameraMatrix;
    Core.multiply(E, cameraMatrix, E);

    Mat R = new Mat();
    Mat.zeros(3, 3, CvType.CV_64FC1).copyTo(R);
    Mat T = new Mat();

    calculateRT(E, R, T);
    private void calculateRT(Mat E, Mat R, Mat T) {
        /*
         * //-- Step 6: calculate Rotation Matrix and Translation Vector
         * Matx34d P;
         * //decompose E
         * SVD svd(E, SVD::MODIFY_A);
         * Mat svd_u = svd.u;
         * Mat svd_vt = svd.vt;
         * Mat svd_w = svd.w;
         * Matx33d W(0,-1,0, 1,0,0, 0,0,1); //HZ 9.13
         * Mat_<double> R = svd_u * Mat(W) * svd_vt;
         * Mat_<double> T = svd_u.col(2); //u3
         * if (!CheckCoherentRotation(R)) {
         *     std::cout << "resulting rotation is not coherent\n";
         *     return 0;
         * }
         */
        Mat w = new Mat();
        Mat u = new Mat();
        Mat vt = new Mat();
        Core.SVDecomp(E, w, u, vt, Core.DECOMP_SVD); // Maybe use flags

        double[] W_Values = {0,-1,0, 1,0,0, 0,0,1};
        Mat W = new Mat(new Size(3,3), CvType.CV_64FC1, new Scalar(W_Values));

        Core.multiply(u, W, R);
        Core.multiply(R, vt, R);
        T = u.col(2);
    }
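One thing worth noting when reading the code above: Core.multiply is a per-element product, while the commented expression cameraMatrix.t()*fundamental*cameraMatrix is a chained matrix product (which would need a true matrix multiplication such as Core.gemm in the Java bindings). A plain-Java sketch of the intended product E = K^T * F * K, with hypothetical K and F values:

```java
public class EssentialFromF {
    // Plain 3x3 matrix product C = A * B (row-major double[3][3]).
    // Note this is NOT the same as the element-wise product
    // (A .* B)[i][j] = A[i][j] * B[i][j], which is what Core.multiply computes.
    static double[][] mul(double[][] A, double[][] B) {
        double[][] C = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    static double[][] transpose(double[][] A) {
        double[][] T = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                T[i][j] = A[j][i];
        return T;
    }

    public static void main(String[] args) {
        // Hypothetical pinhole camera matrix K and fundamental matrix F.
        double[][] K = {{800, 0, 320}, {0, 800, 240}, {0, 0, 1}};
        double[][] F = {{0, -1e-6, 1e-3}, {1e-6, 0, -2e-3}, {-1e-3, 2e-3, 1}};
        double[][] E = mul(mul(transpose(K), F), K); // E = K^T * F * K
        System.out.println(java.util.Arrays.deepToString(E));
    }
}
```

The same caveat applies to the products u * W * vt inside calculateRT.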
And here are the values of all matrices during and after the calculation:
Number matches: 10299
Number of good matches: 590
Number of obj_points left: 590.0
Fundamental:
[4.209958176688844e-08, -8.477216249742946e-08, 9.132798068178793e-05;
3.165719895008366e-07, 6.437858397735847e-07, -0.0006976204595236443;
0.0004532506630569588, -0.0009224427024602799, 1]
Essential:
[0.05410018455525099, 0, 0;
0, 0.8272987826496967, 0;
0, 0, 1]
U:
[0, 0, 1;
0, 0.9999999999999999, 0;
1, 0, 0]
W:
[1; 0.8272987826496967; 0.05410018455525099]
vt:
[0, 0, 1;
0, 1, 0;
1, 0, 0]
R:
[0, 0, 0;
0, 0, 0;
0, 0, 0]
T:
[1; 0; 0]
And for completeness, here are the images I am using:
left: https://drive.google.com/file/d/0Bx9OKnxaua8kXzRFNFRtMlRHSzg/edit?usp=sharing
right: https://drive.google.com/file/d/0Bx9OKnxaua8kd3hyMjN1Zll6ZkE/edit?usp=sharing
Can someone point out where something is going wrong, or what I am doing wrong?
glethien (Sat, 29 Mar 2014 06:52:14 -0500)
http://answers.opencv.org/question/30824/