# findEssentialMat give different results according to the number of feature points [closed]

Hello,

I use the findEssentialMat function on a set of feature points (~1200 points) and then the triangulatePoints function to recover the 3D positions of those feature points. But I have a problem with findEssentialMat: the result seems to change depending on the number of points.

For example, if I use 1241 points for one frame, the result is quite good (R = 0.5, 0.5, 0.5 and t = 1, 0, 0), but if I remove only one point the result is totally different (R = 3.0, 2.0, 2.0 and t = 0, 0, 1). I tried removing other feature points, and sometimes it works and sometimes it doesn't. I don't understand why. Is there a reason for that?

std::vector<cv::Point2d> static_feature_point_t;
std::vector<cv::Point2d> static_feature_point_tmdelta;

// read from file

cv::FileNode feature_point_t       = fs_t["feature_point"];
cv::FileNode feature_point_tmdelta = fs_tmdelta["feature_point"];
feature_point_t       >> static_feature_point_t;
feature_point_tmdelta >> static_feature_point_tmdelta;

fs_t.release();
fs_tmdelta.release();

double focal = 300.;
cv::Point2d camera_principal_point(320, 240);

cv::Mat essential_matrix = cv::findEssentialMat(static_feature_point_t, static_feature_point_tmdelta, focal, camera_principal_point, cv::LMEDS);

cv::Mat rotation, translation;
cv::recoverPose(essential_matrix, static_feature_point_t, static_feature_point_tmdelta, rotation, translation, focal, camera_principal_point);
cv::Mat rot(3,1,CV_64F);
cv::Rodrigues(rotation, rot);
std::cout << "rotation " << rot*180./M_PI << std::endl;
std::cout << "translation " << translation << std::endl;


The two lists of feature points are here (I didn't find out how to upload files on the forum, or whether it is possible).

Thanks,


### Closed for the following reason the question is answered, right answer was accepted by mnchapel close date 2017-05-12 03:24:06.278384


Did you solve your problem? Can you post the data in YML format?

( 2017-05-11 01:25:38 -0500 )

No, I didn't. I added the data and changed the code to match the .yml files. If you run the code with all the points it works, but if you remove the last two points of static_feature_point_t and static_feature_point_tmdelta, the result is totally different (there are 1241 feature points).

( 2017-05-11 05:23:34 -0500 )


I took a look at the OpenCV code, and it seems that only five points are randomly chosen among all the feature points to compute each candidate essential matrix; cv::RNG::uniform(0, count) is used to choose them (with count equal to the number of feature points given to findEssentialMat). So I suppose the error depends on which points happen to be chosen. A priori there is no real solution: I now pick six points at random and, if the resulting essential matrix is not good, I compute it again. (Thanks LBerger for your time.)


Yes, I missed this too. It is written in the doc: LMEDS = Least-Median-of-Squares. Use stereoCalibrate if you know the marker locations in space, or go back to the Levenberg method.

( 2017-05-12 03:47:27 -0500 )

Your problem is in the Rodrigues function: it does not give you three angles but a rotation vector. The vector points along the axis of the rotation, and its magnitude is the rotation angle in radians, so the three components are not Euler angles and multiplying them by 180/M_PI does not turn them into Euler angles.

Results are:

Mean marker distance 22.0907
Essai 0 with 1241 points
rodrigues [-0.008186903247588993, -0.007724206343463332, -0.007796571200079321]
translation [-0.792428255173678, 0.607350228303508, -0.05641950533350303]
Essai 1 with 1141 points
rodrigues [-0.006422562844045941, -0.007930323721579204, -0.007210073468255511]
translation [-0.7848951038350637, 0.6194026473826763, -0.01673428788675164]
Essai 2 with 1051 points
rodrigues [-0.007931756116646859, -0.008254818133971985, -0.007837048508731704]
translation [-0.7885528566096911, 0.6133322518870682, -0.04481005610165935]
Essai 3 with 971 points
rodrigues [-0.005932638328178316, -0.007302101937579567, -0.006471234288512953]
translation [-0.7865388205016264, 0.6173586807921311, -0.01499810303038783]
Essai 4 with 901 points
rodrigues [-0.04190340485114951, -0.05506477514402782, -0.005574481988185056]
translation [0.05958445984982116, -0.03116133096247916, -0.9977367706950827]
Essai 5 with 841 points
rodrigues [-0.007732722496448771, -0.008182869208508877, -0.007570432016673033]
translation [-0.7867221935703506, 0.6156568405101853, -0.04510925489156264]
Essai 6 with 791 points
rodrigues [-0.006090562167324761, -0.007260625571239707, -0.006922225724916698]
translation [-0.7856904222816165, 0.6181434310759613, -0.02427465659022878]
Essai 7 with 751 points
rodrigues [-0.006480131550192582, -0.007292119308853618, -0.006526800541123394]
translation [-0.7906105072805985, 0.6120260023408047, -0.01895252585404144]
Essai 8 with 721 points
rodrigues [-0.002347222579832763, -0.007466597396004799, -0.005961284930281934]
translation [-0.7572073975061766, 0.6529732600544519, 0.01621353804029994]
Essai 9 with 701 points
rodrigues [-0.006929786646756651, -0.007944380422144397, -0.007456723468566674]
translation [-0.7846584369744524, 0.6189268459772316, -0.0352235235813468]


with this program:

#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{
    vector<cv::Point2d> static_feature_point_t;
    vector<cv::Point2d> static_feature_point_tmdelta;

    // read the two point lists (file names are placeholders for the attached .yml files)
    FileStorage f1("feature_point_t.yml", FileStorage::READ);
    FileStorage f2("feature_point_tmdelta.yml", FileStorage::READ);
    f1["feature_point"] >> static_feature_point_t;
    f1.release();
    f2["feature_point"] >> static_feature_point_tmdelta;
    f2.release();

    // draw both point sets and compute the mean displacement between the frames
    Mat x(500, 500, CV_8UC3, Scalar(0, 0, 0));
    Mat y(500, 500, CV_8UC3, Scalar(0, 0, 0));
    double d = 0;
    for (size_t i = 0; i < static_feature_point_t.size(); i++)
    {
        circle(x, static_feature_point_t[i], 3, Scalar(0, 0, 255));
        circle(y, static_feature_point_tmdelta[i], 3, Scalar(0, 255, 255));
        d += norm(static_feature_point_t[i] - static_feature_point_tmdelta[i]);
    }
    cout << "Mean marker distance " << d / static_feature_point_t.size() << "\n";
    imshow("ptx", x);
    imshow("pty", y);
    waitKey();

    // estimate the pose ten times, removing blocks of points between trials
    for (int i = 0; i < 10; i++)
    {
        double focal = 300.;
        cv::Point2d camera_principal_point(320, 240);
        cv::Mat essential_matrix = cv::findEssentialMat(static_feature_point_t, static_feature_point_tmdelta, focal, camera_principal_point, cv::LMEDS);
        cv::Mat rotation = Mat::zeros(3, 3, CV_64F), translation = Mat::zeros(3, 1, CV_64F);
        cv::recoverPose(essential_matrix, static_feature_point_t, static_feature_point_tmdelta, rotation, translation, focal, camera_principal_point);
        cv::Mat rot(3, 1, CV_64F);
        cv::Rodrigues(rotation, rot);
        cout << "Essai " << i << " with " << static_feature_point_t.size() << " points\n";
        std::cout << "rodrigues " << rot.t() << std::endl;
        std::cout << "translation " << translation.t() << std::endl;
        // drop several 10-point blocks, keeping the two lists in correspondence
        for (int j = i; j < 10; j++)
        {
            static_feature_point_t.erase(static_feature_point_t.begin() + j * 20, static_feature_point_t.begin() + j * 20 + 10);
            static_feature_point_tmdelta.erase(static_feature_point_tmdelta.begin() + j * 20, static_feature_point_tmdelta.begin() + j * 20 + 10);
        }
    }
    return 0;
}


Thanks for your answer. I hadn't realized I was misusing the Rodrigues function, so thank you for that. But if you look at the result of Essai 4, you can see there is still a problem with the translation vector: for all the other trials it is about [-0.8, 0.6, 0.0], while for Essai 4 it is about [0.0, 0.0, -1].

( 2017-05-11 11:29:55 -0500 )


## Stats

Asked: 2017-05-09 12:30:09 -0500

Seen: 1,305 times

Last updated: May 12 '17