
Position and rotation of two cameras. Which functions do I need? In what order?

asked 2015-06-18 10:49:46 -0600

Nepolo

updated 2015-06-18 15:47:39 -0600

I would like to compute the relative position and rotation of two cameras with OpenCV. I use two images with dots. Here are the two images.

[image 1: http://www.bilder-upload.eu/show.php?...]

[image 2: http://www.bilder-upload.eu/show.php?...]

I know the horizontal value (X) and vertical value (Y) of each point in the two images.

I would like to compute the relative position and rotation of the two cameras.

I tested in Blender whether this is possible. With motion tracking, Blender was able to compute the relative position and rotation from 8 or more points. The result is very accurate.

Here is my Blender test 3D View.

[image: http://www.bilder-upload.eu/show.php?...]

I found many OpenCV functions, but I do not know which ones I need. stereoCalibrate? findFundamentalMat?

Which functions do I need? In what order?


4 answers


answered 2015-06-19 02:32:25 -0600

Do you have the (relative) 3D positions of your points, or only their projections into the two images? If you have them, have a look at solvePnP.
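
A minimal sketch of the call, for illustration only; all numeric values below (object points, image points, intrinsic matrix) are made-up placeholders, not data from this question:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main()
{
    // Hypothetical 3D points (in some world frame) and their 2D projections
    std::vector<cv::Point3f> objectPoints = { {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0} };
    std::vector<cv::Point2f> imagePoints  = { {320.f, 240.f}, {400.f, 250.f}, {390.f, 330.f}, {310.f, 320.f} };

    // Placeholder intrinsic matrix (fx, fy, cx, cy)
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);

    // rvec/tvec give the pose of the world frame in the camera frame
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, K, cv::Mat(), rvec, tvec);
    std::cout << "rvec=" << rvec << " tvec=" << tvec << std::endl;
    return 0;
}

Calling this once per camera (with the same 3D points) gives each camera's pose, from which the relative pose between the two cameras follows.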


answered 2015-07-06 11:03:48 -0600

Eduardo

updated 2015-07-06 11:28:34 -0600

In reply to the post of @Nepolo:

UPDATE 2015.07.06

Let alpha, beta and gamma be the yaw, pitch and roll angles as defined here (slide 50, Bernard BAYLE), also called Tait-Bryan angles Z1Y2X3:

[images: Yaw, Pitch, Roll]

The final rotation matrix is the product of the three rotation matrices about each axis:

[images: R = Rz(alpha) · Ry(beta) · Rx(gamma), i.e. the product of the elementary rotation matrices about the Z, Y and X axes]

Your resulting rotation matrix is decomposed as:

  cout << Qx.t() << endl;
  //RESULT
  //[1, 0, 0;
  // 0, 0.9999999980133975, 6.303336619882133e-05;
  // 0, -6.303336619882133e-05, 0.9999999980133975]

  cout << Qy.t() << endl;
  //RESULT
  //[0.9999999997416501, 0, 2.273102883446428e-05;
  // 0, 1, 0;
  // -2.273102883446428e-05, 0, 0.9999999997416501]

  cout << Qz.t() << endl;
  //RESULT
  //[0.985944918764345, -0.1670706951047037, 0;
  // 0.1670706951047037, 0.985944918764345, 0;
  // 0, 0, 1]

We have Qx ≈ identity, Qy ≈ identity, and alpha_angle = atan2(0.1670706951047037, 0.985944918764345) ≈ 9.6°, so you cannot end up with:

"That should be the correct values: Roll 5°, Pitch 0°, Yaw -45°."

Some test code:

#include <iostream>
#include <opencv2/opencv.hpp>

cv::Mat rotationMatrixX(const double angle) {
  cv::Mat Rx = (cv::Mat_<double>(3, 3) << 1.0, 0.0, 0.0,
                                          0.0, cos(angle), -sin(angle),
                                          0.0, sin(angle), cos(angle));
  return Rx;
}

cv::Mat rotationMatrixY(const double angle) {
  cv::Mat Ry = (cv::Mat_<double>(3, 3) << cos(angle), 0.0, sin(angle),
                                          0.0, 1.0, 0.0,
                                          -sin(angle), 0.0, cos(angle));
  return Ry;
}

cv::Mat rotationMatrixZ(const double angle) {
  cv::Mat Rz = (cv::Mat_<double>(3, 3) << cos(angle), -sin(angle), 0.0,
                                          sin(angle), cos(angle), 0.0,
                                          0.0, 0.0, 1.0);
  return Rz;
}

cv::Mat yawPitchRollToRotationMatrix(const double roll, const double pitch, const double yaw) {
  cv::Mat Rx = rotationMatrixX(roll);
  cv::Mat Ry = rotationMatrixY(pitch);
  cv::Mat Rz = rotationMatrixZ(yaw);

  return Rz*Ry*Rx;
}

double rad2deg(const double rad) {
  return rad*180.0/CV_PI;
}

double deg2rad(const double deg) {
  return deg*CV_PI/180.0;
}

int main(int argc, char **argv) {
  double roll = deg2rad(5.0);
  double pitch = deg2rad(0.0);
  double yaw = deg2rad(-45.0);

  cv::Mat R = yawPitchRollToRotationMatrix(roll, pitch, yaw);
  std::cout << "R=" << R << std::endl;

  cv::Mat mtxR,mtxQ;
  cv::Mat Qx,Qy,Qz;
  cv::Vec3d e;

  e = cv::RQDecomp3x3(R, mtxR, mtxQ, Qx, Qy, Qz);
  std::cout << "Euler angles (Roll-Pitch-Yaw angles) in degree=" << e << std::endl;

  return 0;
}

and the corresponding output:

R=[0.7071067811865476, 0.7044160264027586, -0.06162841671621935;
  -0.7071067811865475, 0.7044160264027587, -0.06162841671621935;
  0, 0.08715574274765817, 0.9961946980917455]
Euler angles (Roll-Pitch-Yaw angles) in degree=[5, 0, -45]

Comments

Thank you very much. After a long time I was able to continue testing the code. That was a great help.

Nepolo ( 2015-08-20 16:22:41 -0600 )

answered 2015-06-20 11:14:11 -0600

Nepolo

updated 2015-07-06 08:14:03 -0600

Hi, thanks for the answers.

"Do you have the (relative) 3D positions of your points?"

I don't know the (relative) 3D positions of the points. I only know the two images, the focal length, and the sensor size of the cameras.

I tested in Blender whether this is possible. Blender's motion tracking can compute the relative position and rotation of the two cameras. Blender needs at least 8 points in both images. The result is very accurate.

It is possible. But how?

I have found the OpenCV function findFundamentalMat.

findFundamentalMat also requires at least 8 points* in both images. This is the same rule as in Blender. (A minimal sketch of the call is shown after the list below.)

*CV_FM_8POINT for an 8-point algorithm. N>=8

CV_FM_RANSAC for the RANSAC algorithm. N>=8

CV_FM_LMEDS for the LMedS algorithm. N>=8
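
For illustration, a minimal sketch of how findFundamentalMat and computeCorrespondEpilines can be called; the eight correspondences are taken from the code further down in this post, and the choice of the 8-point method (cv::FM_8POINT) is only an example:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main()
{
    // Eight point correspondences (image 1 / image 2), copied from the code below
    std::vector<cv::Point2f> pts1 = {
        {403.83f, 299.63f}, {311.50f, 388.50f}, {741.90f,  72.08f}, {488.45f, 211.58f},
        {250.60f, 200.43f}, {171.00f, 529.08f}, {400.00f, 227.75f}, {513.95f, 414.00f} };
    std::vector<cv::Point2f> pts2 = {
        {401.38f, 300.03f}, {310.45f, 378.28f}, {567.58f, 160.20f}, {397.43f, 237.73f},
        {314.95f, 229.70f}, {359.90f, 477.50f}, {272.78f, 251.90f}, {508.15f, 361.03f} };

    // 8-point algorithm; with more (noisy) points CV_FM_RANSAC or CV_FM_LMEDS is more robust
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_8POINT);
    std::cout << "F=" << F << std::endl;

    // Epipolar lines in image 2 that correspond to the points of image 1
    std::vector<cv::Vec3f> linesInImage2;
    cv::computeCorrespondEpilines(pts1, 1, F, linesInImage2);
    for (size_t i = 0; i < linesInImage2.size(); i++)
        std::cout << "line " << i << ": " << linesInImage2[i] << std::endl;
    return 0;
}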

And I found the function stereoCalibrate.

UPDATE 2015.06.29

Hi, thanks for the comments.

"Why 8 points? As LBerger said, every pair of points gives you a constraint in the form of p2'Fp1 = 0. Why you need 8 points is not that obvious."

The animation program Blender can compute the points and the position of the camera. Blender says that you need at least 8 points.

I took a look at the function computeCorrespondEpilines. Here is the result. Image 2 looks very good.

Image 1

[image: epipolar lines drawn on image 1]

Image 2

[image: epipolar lines drawn on image 2]

I have copied the image into Blender.

Here is the result: [image]

But what happens next? How can I get the following values: X, Y, Z, roll, pitch and yaw?


Here is my code.

#include <QCoreApplication>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>
#include <iostream>
using namespace cv;
using namespace std;


Mat image1;
Mat image2;
Mat  F;

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    // Open image 1
    QString fileName = "cam1c.jpg";
    image1 = imread(fileName.toAscii().data());

    // Open image 2
    QString fileName2 = "cam2c.jpg";
    image2 = imread(fileName2.toAscii().data());



  // these are the points
  // I added the point values manually (it is a test)
  // use Point2f here, otherwise the sub-pixel coordinates are truncated to integers
  vector<Point2f> image1_points;
  vector<Point2f> image2_points;
  image1_points.push_back(Point2f(403.83, 299.63));
  image2_points.push_back(Point2f(401.38, 300.03));

  image1_points.push_back(Point2f(311.5, 388.5));
  image2_points.push_back(Point2f(310.45, 378.28));

  image1_points.push_back(Point2f(741.9, 72.08));
  image2_points.push_back(Point2f(567.58, 160.20));

  image1_points.push_back(Point2f(488.45, 211.58));
  image2_points.push_back(Point2f(397.43, 237.73));

  image1_points.push_back(Point2f(250.6, 200.43));
  image2_points.push_back(Point2f(314.95, 229.7));

  image1_points.push_back(Point2f(171, 529.08));
  image2_points.push_back(Point2f(359.9, 477.5));

  image1_points.push_back(Point2f(400, 227.75));
  image2_points.push_back(Point2f(272.78, 251.90));

  image1_points.push_back(Point2f(513.95, 414));
  image2_points.push_back(Point2f(508.15, 361.03));

  image1_points.push_back(Point2f(280.68, 140.9));
  image2_points.push_back(Point2f(223.55, 178.93));

  image1_points.push_back(Point2f(479.58, 220.48));
  image2_points.push_back(Point2f(355.98, 244.63));

  image1_points.push_back(Point2f(621.95, 122.48));
  image2_points.push_back(Point2f(454.78 ...
(more)

Comments

If in the first image a point is at coordinate p1 and in the second image at p2, then you have p2'Fp1 = 0, where F is the fundamental matrix. F gives you the epipolar line.

LBerger ( 2015-06-20 14:23:21 -0600 )

Why 8 Points? As LBerger said, every pair of points gives you a constraint in the form of p2'Fp1 = 0. Why you need 8 points is not that obvious. But https://en.wikipedia.org/wiki/Eight-p... and this nice song: https://www.youtube.com/watch?v=DgGV3... help a bit.

FooBar ( 2015-06-22 01:37:24 -0600 )

I think that you first need to compute the essential matrix from the fundamental matrix as E = transpose(K')FK, with K and K' the intrinsic matrices.

Afterwards, you have to extract the camera motion from the essential matrix. A minimal sketch follows, and you can also look at these links:
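
A minimal sketch of that step, assuming OpenCV 3.x and (for simplicity) the same intrinsic matrix K for both cameras; the eight correspondences are the ones listed in @Nepolo's post, and the K values are placeholders, not real calibration data:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main()
{
    // Point correspondences (image 1 / image 2) from @Nepolo's post
    std::vector<cv::Point2f> pts1 = {
        {403.83f, 299.63f}, {311.50f, 388.50f}, {741.90f,  72.08f}, {488.45f, 211.58f},
        {250.60f, 200.43f}, {171.00f, 529.08f}, {400.00f, 227.75f}, {513.95f, 414.00f} };
    std::vector<cv::Point2f> pts2 = {
        {401.38f, 300.03f}, {310.45f, 378.28f}, {567.58f, 160.20f}, {397.43f, 237.73f},
        {314.95f, 229.70f}, {359.90f, 477.50f}, {272.78f, 251.90f}, {508.15f, 361.03f} };

    // Placeholder intrinsics -- replace with the real calibration of each camera
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 512,
                                           0, 800, 384,
                                           0,   0,   1);

    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_8POINT);

    // Essential matrix from the fundamental matrix: E = K'^T * F * K (here K' = K)
    cv::Mat E = K.t() * F * K;

    // Relative rotation R and translation t (up to scale) of camera 2 w.r.t. camera 1;
    // recoverPose resolves the fourfold ambiguity with a cheirality check
    cv::Mat R, t;
    cv::recoverPose(E, pts1, pts2, K, R, t);

    // Roll/pitch/yaw angles in degrees from R (see also the RQDecomp3x3 usage elsewhere in this thread)
    cv::Mat mtxR, mtxQ;
    cv::Vec3d angles = cv::RQDecomp3x3(R, mtxR, mtxQ);
    std::cout << "R=" << R << "\nt=" << t << "\nangles=" << angles << std::endl;
    return 0;
}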

Eduardo ( 2015-06-29 12:12:23 -0600 )

answered 2015-06-19 07:55:37 -0600

LBerger

updated 2015-06-19 07:56:29 -0600

Using @FooBar's answer, here is my program; I hope it's good.

#include <opencv2/opencv.hpp> 
#include <iostream>
#include <map>
#include <fstream>

using namespace cv;
using namespace std;

double PI = acos(-1.0);

int main(int argc, char** argv)
{
    RNG rng(0xFFFFFFFF);
    vector< Mat > point(8);
    vector< Point3d > p3d(point.size()); // Real point in world coordinate
    vector< Point2d > p2d(point.size()); // point in image coordinate p2d=A[R|t]p3d  http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html

    // Real points randomly placed in a cube centered at (10.5,10.5,10.5) with width 1
    for (int i = 0; i < point.size(); i++)
    {
        Mat_<double> v(4, 1);
        v(0, 0) = 10 + rng.uniform((double)0, (double)1);
        v(1, 0) = 10 + rng.uniform((double)0, (double)1);
        v(2, 0) = 10 + rng.uniform((double)0, (double)1);
        v(3, 0) = 1;
        point[i] = v;
        p3d[i].x = v(0, 0);
        p3d[i].y = v(1, 0);
        p3d[i].z = v(2, 0);
    }
    Mat_<double> mInt(3, 3);
    // Intrinsic parameters of webcam c270
    mInt(0, 0) = 4520; mInt(0, 1) = 0;    mInt(0, 2) = 600;
    mInt(1, 0) = 0;    mInt(1, 1) = 5490; mInt(1, 2) = 440;
    mInt(2, 0) = 0;    mInt(2, 1) = 0;    mInt(2, 2) = 1;
    // Camera rotation (1.2 rd, 0.9 rd, 0.7 rd)
    Mat_<double> angle(3, 1);
    angle(0, 0) = 1.2; angle(1, 0) = 0.9; angle(2, 0) = 0.7;
    Mat r;
    Mat rt = Mat::eye(3, 4, CV_64F);
    // Camera translation (2.4, 4.8, 0)
    rt.at<double>(0, 3) = 2.4;
    rt.at<double>(1, 3) = 4.8;
    rt.at<double>(2, 3) = 0;
    // Make the rotation matrix from the rotation vector
    Rodrigues(angle, r);

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            rt.at<double>(i, j) = r.at<double>(i, j);

    Mat art;
    // Matrix product of intrinsic and extrinsic parameters
    gemm(mInt, rt, 1, Mat(), 0, art);
    cout << art << endl;
    vector< Mat > pImg(point.size());
    for (int i = 0; i < point.size(); i++)
    {
        gemm(art, point[i], 1, Mat(), 0, pImg[i]);
        // Homogeneous coordinates
        pImg[i] = pImg[i] / pImg[i].at<double>(2, 0);
        p2d[i] = Point2d(pImg[i].at<double>(0, 0), pImg[i].at<double>(1, 0));
        cout << point[i] << "\t" << pImg[i] << endl;
    }
    // p3d real points, p2d image points: what are the values of camera orientation and translation?
    Mat rvec, tvec;
    solvePnP(p3d, p2d, mInt, Mat(), rvec, tvec);
    cout << "Rvec camera = " << rvec << endl;
    cout << "Tvec camera = " << tvec << endl;
    return 0;
}

