
Nepolo's profile - activity

2015-08-22 11:05:07 -0600 asked a question recoverPose translation values

Hi, I use findEssentialMat and recoverPose (OpenCV 3).

I have a small problem with the translation value t. The rotation values (Euler angles e) are very accurate, but the t values do not match what I expect. Maybe I am expecting something else.

The t variable gives me 3 values. Are the three values the relative distances (x, y, z) from camera 1 to camera 2? Is that right?


Mat E, R, t, mask, mtxR, mtxQ, Qx, Qy, Qz;
cv::Point2d pp(7200, 5400);   // principal point in pixels
double focal = 14285;         // focal length in pixels
cv::Vec3d e;                  // Euler angles

// essential matrix from the matched points
E = cv::findEssentialMat(image1_points, image2_points, focal, pp, cv::RANSAC, 0.99, 3, mask);

// relative rotation R and translation t of camera 2 with respect to camera 1
recoverPose(E, image1_points, image2_points, R, t, focal, pp, mask);

// t is a 3x1 CV_64F Mat, so it has to be read with at<double>()
cout << t.at<double>(0) << endl;  // x result 0.7341315867809203    Is this the x value?
cout << t.at<double>(1) << endl;  // y result 0.06380004020757256   Is this the y value?
cout << t.at<double>(2) << endl;  // z result 0.6760032308798827    Is this the z value?

// Euler angles e from the rotation matrix
e = RQDecomp3x3(R, mtxR, mtxQ, Qx, Qy, Qz);
cout << e[0] << endl;  // x result 14.8789
cout << e[1] << endl;  // y result -49.178
cout << e[2] << endl;  // z result -20.6066
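Side note after reading the documentation: the essential matrix determines the translation only up to an unknown scale factor, so recoverPose should return t as a unit direction vector, not as distances. A minimal check sketch, assuming t is the CV_64F result from above (the baseline value is hypothetical, it would have to be measured separately):

// t from recoverPose is only a direction: its norm should print ~1.0
cout << "norm(t) = " << cv::norm(t) << endl;

// with a separately measured baseline (hypothetical value in meters), the
// metric translation from camera 1 to camera 2 would be the scaled direction
double baseline = 1.0;
Mat t_metric = baseline * t;
cout << t_metric << endl;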
2015-08-20 16:22:41 -0600 commented answer Position and Rotation of two cameras. Which functions do I need? In what order?

Thank you very much. After a long time I was finally able to continue testing the code. That was a great help.

2015-06-20 11:14:11 -0600 answered a question Position and Rotation of two cameras. Which functions do I need? In what order?

Hi, thanks for the answers.

Do you have the (relative) 3D position of your points?

I don't know the (relative) 3D position of the points. I only know the two images, the focal length, and the sensor size of the cameras.

I tested with Blender whether this is possible. Blender's motion tracking can compute the relative position and rotation of the two cameras from 8 points in both images. The result is very accurate.

So it is possible. But how?

I have found the OpenCV function findFundamentalMat.

findFundamentalMat also requires a minimum of 8 points* in both images. This is the same rule as in Blender. A minimal call sketch follows the list of method flags below.

*CV_FM_8POINT for the 8-point algorithm, N >= 8

CV_FM_RANSAC for the RANSAC algorithm, N >= 8

CV_FM_LMEDS for the LMedS algorithm, N >= 8
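A sketch of how findFundamentalMat could be called (the point vectors hold the matched pixel coordinates, at least 8 pairs, filled as in my code further below):

vector<Point2f> image1_points, image2_points;   // >= 8 matched points per image
// ... fill with corresponding pixel coordinates ...
Mat fundMask;                                   // inlier mask written by RANSAC
Mat F = findFundamentalMat(image1_points, image2_points, CV_FM_RANSAC, 3, 0.99, fundMask);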

And I found the function stereoCalibrate.

----- UPDATE 2015.06.29 -----

Hi, thanks for the comments.

Why 8 points? As LBerger said, every pair of points gives you a constraint of the form p2' * F * p1 = 0. Why you need 8 points is not that obvious.

The animation program Blender can compute the points and the position of the camera, and Blender also says that you need at least 8 points. My current understanding: F is a 3x3 matrix with 9 entries, but it is only defined up to scale, so 8 unknowns remain; each point pair contributes one linear equation, which is why at least 8 pairs are needed.

I took a look at the function computeCorrespondEpilines. Here are the results; a sketch of the call follows the two images. Image 2 looks very good.

Image 1


Image 2

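Roughly how the epilines in the images above are computed and drawn (a sketch; it assumes the lines are not vertical, and the color is arbitrary):

// epipolar lines in image 2 that correspond to the points in image 1
vector<Vec3f> lines2;
computeCorrespondEpilines(image1_points, 1, F, lines2);   // 1 = points come from image 1

// each line is (a, b, c) with a*x + b*y + c = 0; draw it across the full image width
for (size_t i = 0; i < lines2.size(); i++) {
    Vec3f l = lines2[i];
    Point p1(0, cvRound(-l[2] / l[1]));
    Point p2(image2.cols, cvRound(-(l[2] + l[0] * image2.cols) / l[1]));
    line(image2, p1, p2, Scalar(0, 255, 0));
}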

I have copied the image into Blender.

Here is the result.

But what happens next? How can I get the following values: X, Y, Z, roll, pitch, and yaw?

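What I am currently trying, pieced together from the docs (not verified; the intrinsic values are the ones from my recoverPose test above, treat them as examples): build the camera matrix K from the focal length and image center, turn F into an essential matrix, and decompose it.

// camera intrinsics (example values: focal length and principal point in pixels)
double focal = 14285, cx = 7200, cy = 5400;
Mat K = (Mat_<double>(3, 3) << focal, 0, cx,
                               0, focal, cy,
                               0, 0, 1);

Mat E = K.t() * F * K;   // essential matrix from the fundamental matrix: E = K' * F * K
Mat R, t;
// R: relative rotation (roll/pitch/yaw via RQDecomp3x3), t: direction of X, Y, Z
recoverPose(E, image1_points, image2_points, K, R, t);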

Here is my code.

#include <QCoreApplication>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>
#include <iostream>
using namespace cv;
using namespace std;


Mat image1;
Mat image2;
Mat  F;

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    // open image 1
    QString fileName = "cam1c.jpg";
    image1 = imread(fileName.toStdString());

    // open image 2
    QString fileName2 = "cam2c.jpg";
    image2 = imread(fileName2.toStdString());



  // these are the points
  // I added the point values manually (it is a test)
  // Point2f keeps the decimals; plain Point would truncate them to integers
  vector<Point2f> image1_points;
  vector<Point2f> image2_points;
  image1_points.push_back(Point2f(403.83, 299.63));
  image2_points.push_back(Point2f(401.38, 300.03));

  image1_points.push_back(Point2f(311.5, 388.5));
  image2_points.push_back(Point2f(310.45, 378.28));

  image1_points.push_back(Point2f(741.9, 72.08));
  image2_points.push_back(Point2f(567.58, 160.20));

  image1_points.push_back(Point2f(488.45, 211.58));
  image2_points.push_back(Point2f(397.43, 237.73));

  image1_points.push_back(Point2f(250.6, 200.43));
  image2_points.push_back(Point2f(314.95, 229.7));

  image1_points.push_back(Point2f(171, 529.08));
  image2_points.push_back(Point2f(359.9, 477.5));

  image1_points.push_back(Point2f(400, 227.75));
  image2_points.push_back(Point2f(272.78, 251.90));

  image1_points.push_back(Point2f(513.95, 414));
  image2_points.push_back(Point2f(508.15, 361.03));

  image1_points.push_back(Point2f(280.68, 140.9));
  image2_points.push_back(Point2f(223.55, 178.93));

  image1_points.push_back(Point2f(479.58, 220.48));
  image2_points.push_back(Point2f(355.98, 244.63));

  image1_points.push_back(Point2f(621.95, 122.48));
  image2_points.push_back(Point2f(454.78 ...
(more)

2015-06-18 10:50:22 -0600 asked a question Position and Rotation of two cameras. Which functions do I need? In what order?

I would like to compute the relative position and rotation of two cameras with OpenCV. I use two images with dots. Here are the two images.

Image 1: http://www.bilder-upload.eu/show.php?...

Image 2: http://www.bilder-upload.eu/show.php?...

I know the horizontal value (X) and vertical value (Y) of each point in the two images.

I would like to compute the relative position and rotation of the two cameras.

I tested with Blender whether this is possible. With motion tracking, Blender was able to compute the relative position and rotation from 8 points or more. The result is very accurate.

Here is my Blender test 3D View.

http://www.bilder-upload.eu/show.php?...

I found many OpenCV functions, but I do not know which ones I need. stereoCalibrate? findFundamentalMat?

Which functions do I need? In what order?
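For reference, the order I ended up testing later (see the recoverPose question at the top of this page). A sketch with the intrinsics from my test setup; treat the values as examples:

vector<Point2f> pts1, pts2;     // step 1: at least 8 matched points per image
// ... fill pts1/pts2 with corresponding pixel coordinates ...

double focal = 14285;           // focal length in pixels
Point2d pp(7200, 5400);         // principal point

// step 2: essential matrix directly from the matches
Mat mask;
Mat E = findEssentialMat(pts1, pts2, focal, pp, RANSAC, 0.99, 3, mask);

// step 3: relative rotation and translation direction (up to scale)
Mat R, t;
recoverPose(E, pts1, pts2, R, t, focal, pp, mask);

// step 4: Euler angles (degrees) from the rotation matrix
Mat mtxR, mtxQ;
Vec3d angles = RQDecomp3x3(R, mtxR, mtxQ);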