
cv::triangulatePoints give strange results

asked 2016-12-03 17:20:56 -0600 by never-ever

updated 2016-12-05 16:27:25 -0600

Hi,
I have images from two cameras (left and right). First I calibrate the cameras and obtain the camera matrices, distortion coefficients, and projection matrices.
On the images I detect a marker and save its positions on the left and right image. Then I call cv::triangulatePoints, but I get really strange results (another library that also computes these 3D coordinates gives much more plausible values).

Here is my code:

int size0 = m_history.getSize(0);
int size1 = m_history.getSize(1);

if(size0 != size1)
{
    setInfo(tr("Cannot calculate triangulation: number of marker positions is different on left and right camera"));
    return false;
}
if(size0 <= 0)
{
    setInfo(tr("No marker position saved. Cannot calculate triangulation."));
    return false;
}

cv::Mat pointsMat1(2, 1, CV_64F);
cv::Mat pointsMat2(2, 1, CV_64F);

for(int i = 0; i < size0; i++)
{
    cv::Point2f pt1 = m_history.getOriginalPoint(0, i);   // before the edit this was cv::Point, not cv::Point2f
    cv::Point2f pt2 = m_history.getOriginalPoint(1, i);   // before the edit this was cv::Point, not cv::Point2f

    pointsMat1.at<double>(0,0) = pt1.x;   // was mistakenly pt1.y -- the typo pointed out in the comments
    pointsMat1.at<double>(1,0) = pt1.y;
    pointsMat2.at<double>(0,0) = pt2.x;
    pointsMat2.at<double>(1,0) = pt2.y;

    // triangulatePoints (re)allocates the output itself; with CV_64F input
    // points the 4x1 homogeneous result is CV_64F, so at<double> below is valid.
    cv::Mat pnts3D;

    cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pointsMat1, pointsMat2, pnts3D);

    CvPoint3D64f point3D;
    point3D.x = pnts3D.at<double>(0, 0);
    point3D.y = pnts3D.at<double>(1, 0);
    point3D.z = pnts3D.at<double>(2, 0);

    // Convert from homogeneous to Euclidean coordinates.
    point3D.x = point3D.x / pnts3D.at<double>(3, 0);
    point3D.y = point3D.y / pnts3D.at<double>(3, 0);
    point3D.z = point3D.z / pnts3D.at<double>(3, 0);
    m_history.addTriangulatedPoint(point3D);
}

Where is my mistake?
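(For reference, a minimal sketch of the same triangulation done on all points in one call, which is how cv::triangulatePoints is usually used; the m_history accessors are the ones from the code above, everything else is plain OpenCV:)

// Collect all correspondences as 2xN CV_64F matrices (one column per point).
cv::Mat pts1(2, size0, CV_64F), pts2(2, size0, CV_64F);
for(int i = 0; i < size0; i++)
{
    cv::Point2f p1 = m_history.getOriginalPoint(0, i);
    cv::Point2f p2 = m_history.getOriginalPoint(1, i);
    pts1.at<double>(0, i) = p1.x;  pts1.at<double>(1, i) = p1.y;
    pts2.at<double>(0, i) = p2.x;  pts2.at<double>(1, i) = p2.y;
}

// triangulatePoints allocates the 4xN homogeneous output itself; with
// CV_64F input points the output is CV_64F as well.
cv::Mat pts4D;
cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pts1, pts2, pts4D);

for(int i = 0; i < pts4D.cols; i++)
{
    double w = pts4D.at<double>(3, i);   // homogeneous scale
    CvPoint3D64f p;
    p.x = pts4D.at<double>(0, i) / w;
    p.y = pts4D.at<double>(1, i) / w;
    p.z = pts4D.at<double>(2, i) / w;
    m_history.addTriangulatedPoint(p);
}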

Results from triangulation:

 3D_X              3D_Y              3D_Z

1.20151e+24    7.70359e+23     4.41239e+24
-2.23198e+23    -3.11741e+22    -7.98741e+23
3.79697e+22    -1.45599e+22    1.31405e+23
-7.27922e+22    6.40761e+22    -2.45849e+23
7.11722e+22    -9.67023e+22    2.32894e+23
2.14094e+22    -2.26098e+22    7.11648e+22
4.95297e+22    -2.77331e+22    1.70013e+23
2.21597e+23    -1.22193e+22    7.79736e+23
-3.94602e+23    -1.85898e+23    -1.43658e+24
2.56816e+24    2.13037e+24     1.22021e+25

And from the other library (these look quite plausible):

  3D_X              3D_Y              3D_Z

8.33399      4.74962       62.6119
8.33243       -1.62743       60.8669
8.40008       -8.43666       59.5643
8.47287       -15.1735       58.3184
8.52143       -21.964       57.0896
8.4673       -17.6672       57.6249
8.31039       -10.9452       58.6376
8.39497       -4.27203       60.6766
8.37444       2.56077       62.3287
5.41351       4.73769       62.7424

My projection matrices:

P1 = 
    4.484533e+02         0             3.073146e+02    0 
           0           4.484533e+02    2.473914e+02    0 
           0                  0             1          0

P2 = 
    4.484533e+02         0             3.073146e+02     -8.275082e+02 
           0           4.484533e+02    2.473914e+02          0 
           0                  0              1               0

EDIT:

After correcting cv::Point to cv::Point2f and using cv::undistortPoints I get the following: (I pass to cv::undistortPoints only ...
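(A minimal sketch of such an undistortPoints call, assuming pt1/pt2 are one left/right pair as in the loop above and R1/R2 are the rectification rotations returned by stereoRectify alongside the projection matrices. Without the R and P arguments, undistortPoints returns normalized coordinates, which must not be mixed with pixel-based projection matrices:)

std::vector<cv::Point2f> raw1(1, pt1), und1;
std::vector<cv::Point2f> raw2(1, pt2), und2;

// Passing R1/P1 (and R2/P2) makes the output pixel coordinates in the
// rectified frames that m_projectionMat1/m_projectionMat2 expect.
cv::undistortPoints(raw1, und1, m_cameraMatrix1, m_distortionCoeffs1, R1, m_projectionMat1);
cv::undistortPoints(raw2, und2, m_cameraMatrix2, m_distortionCoeffs2, R2, m_projectionMat2);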


Comments

Question. Is one of your cameras the origin of the coordinate system or do both of the camera translations have values?

Tetragramm (2016-12-04 16:28:13 -0600)

Could you explain more precisely what you are asking? I do not quite understand (perhaps because my English is not perfect).

never-ever (2016-12-04 16:38:15 -0600)

When you create the projection matrix, you create it from the camera matrix, rvec, and tvec. What are the values of the rvec and tvec for each camera?

If you need to, you can use the decomposeProjectionMatrix function to get the values.

Tetragramm (2016-12-04 16:53:30 -0600)
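(A minimal sketch of the decomposition Tetragramm refers to, using the matrix name from the question; OpenCV allocates the outputs:)

cv::Mat K, R, t4;  // intrinsics, rotation, 4x1 homogeneous camera position
cv::decomposeProjectionMatrix(m_projectionMat1, K, R, t4);
// De-homogenize to get the 3x1 camera position.
cv::Mat camPos = t4.rowRange(0, 3) / t4.at<double>(3, 0);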

After camera calibration I use stereoRectify to obtain the projection matrices. In a moment I will update my post with the projection matrix values.

never-ever (2016-12-04 16:57:22 -0600)
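(A minimal sketch of that step, assuming R and T are the inter-camera rotation and translation from stereoCalibrate and imageSize is the calibration image size:)

cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(m_cameraMatrix1, m_distortionCoeffs1,
                  m_cameraMatrix2, m_distortionCoeffs2,
                  imageSize, R, T,
                  R1, R2, P1, P2, Q);
// P1/P2 are the rectified projection matrices used for triangulation;
// R1/R2 are the rectification rotations that undistortPoints also needs.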

Those values look correct. Could you include the original 2D pixel values for the first point, please?

Tetragramm (2016-12-04 17:13:32 -0600)

From the first image it is 432.921 x 321.005, and from the second it is 432.918 x 321.005.

never-ever (2016-12-04 17:39:41 -0600)

Is the other library using the information you get from OpenCV, or is it calculating its own?

I tried two different methods and both give very large numbers. Are you sure those pixel values are correct? The difference is awfully small.

Lastly, you have a typo: pointsMat1 was filled with pt1.y twice.

Tetragramm (2016-12-04 18:29:56 -0600)

The other library uses only the camera matrix, the distortion coefficients, and the marker size that I set before starting the calculation. Nothing else. I corrected that typo and updated the triangulation values (now I get very large values from triangulation).

never-ever (2016-12-04 18:36:42 -0600)

Are you undistorting the points or the images before you pass them to triangulate? If not, use the undistortPoints function and try that.

Tetragramm (2016-12-04 18:52:50 -0600)

First I corrected my mistake (I changed the type of the original points pt1 and pt2 from cv::Point to cv::Point2f). I now use undistortPoints before saving to m_history, from which the points are passed to the triangulation method. I have updated the post with the results after undistortPoints and triangulation.

never-ever (2016-12-04 19:16:43 -0600)

1 answer


answered 2016-12-05 19:48:09 -0600 by never-ever

Here is the most important part of my code (not all of it, because the program is large, but I think it will be enough).

Images
From the left camera: left images
From the right camera: right images

Intrinsic and extrinsic parameters for the left camera

cameraMat1 = 
    5.327949e+02      0        3.089701e+02 
         0       5.333043e+02  2.523206e+02 
         0            0            1 

distCoeffs1 = 
    8.571497e-02   
    -2.059629e-01 
    5.049960e-03 
    -3.563517e-03

rotation1 = 
     9.986447e-01    5.134855e-02     -8.494029e-03 
    -5.165607e-02    9.978162e-01     -4.116318e-02 
     6.361810e-03    4.154616e-02      9.991163e-01

projMat1 = 
     4.204745e+02        0         3.185035e+02   0 
           0        4.204745e+02   2.475893e+02   0 
           0             0              1         0

Intrinsic and extrinsic parameters for the right camera

cameraMat2 = 
    5.339174e+02      0        3.001284e+02 
         0       5.343352e+02  2.333189e+02 
         0            0            1 

distCoeffs2 = 
    7.593528e-02   
    -1.829142e-01 
    7.050366e-03 
    -4.569196e-03

rotation2 = 
     9.976312e-01    5.243198e-02     -4.452930e-02 
    -5.054383e-02    9.978163e-01     4.251988e-02 
     4.666146e-02   -4.016848e-02     9.981028e-01

projMat2 = 
     4.204745e+02        0         3.185035e+02   -2.708721e+03 
           0        4.204745e+02   2.475893e+02      0 
           0             0              1            0

1. Here is the startOffline() method that starts all the calculations:

void TrackSystem::startOffline()
{
  try
  {
    saveData(true); //it only sets up boolean variables to know what type of data should be saved

    for(int i = 0; i < m_imageList1.size(); i++)
    {
      cv::Mat* tempMat1 = &m_imageList1[i];
      cv::Mat* tempMat2 = &m_imageList2[i];

      // Wrap the cv::Mat headers as IplImage and clone them, because the
      // legacy callback below still uses the C API. (Note: the clones are
      // never released in this loop, so this leaks memory.)
      IplImage* image1 = cvCloneImage(&(IplImage)(*tempMat1));
      IplImage* image2 = cvCloneImage(&(IplImage)(*tempMat2));

      stereovideocallback(image1, image2);  //see 2.
    }

    calcTriangulation2TEST(); //see first post
    saveData(false);
  }
  catch(const std::exception &e)
  {
    qDebug() << "Exception: " + QString(e.what());
  }
  catch(...) {}
}

2. Here is the stereovideocallback() method that detects markers and saves the data to m_history:

void TrackSystem::stereovideocallback(IplImage* image1, IplImage* image2)
{
  bool flip_image1 = (image1->origin?true:false);
  if(flip_image1)
  {
    cvFlip(image1);
    image1->origin = !image1->origin;
  }

  bool flip_image2 = (image2->origin?true:false);
  if(flip_image2)
  {
    cvFlip(image2);
    image2->origin = !image2->origin;
  }

  // Loading camera calibration
  if(m_init)
  {
    bool calibLoaded1 = loadCalibration(image1, true); //see 3.
    bool calibLoaded2 = loadCalibration(image2, false);
    if(!calibLoaded1 || !calibLoaded2)
      emit calibLoadFailure();
  }

  // Finding and processing markers 
  static MarkerDetector<MarkerData> marker_detector1; //Alvar library
  static MarkerDetector<MarkerData> marker_detector2;

  marker_detector1.SetMarkerSize(m_markerSize);
  marker_detector2.SetMarkerSize(m_markerSize);

  QString timeNow;
  if(m_isDataSaved)
  {
    timeNow = m_timeObj->getTimeString();
  }

  bool found1 = processMarkers(&marker_detector1, image1, 0); // see 4.
  bool found2 = processMarkers(&marker_detector2, image2, 1);

  if(found1 && found2)
  {
    calcStereoMarkers(image1, 0, &marker_detector1, timeNow); // see 5.
    calcStereoMarkers(image2, 1, &marker_detector2, timeNow);
  }

  if(m_init)
    m_init = false;

  if(flip_image1)
  {
    cvFlip(image1);
    image1->origin = !image1->origin;
  }

  if(flip_image2)
  {
    cvFlip(image2);
    image2->origin = !image2->origin;
  }
}

3. Here are the loadCalibration() and readCameraParameters() functions:

bool TrackSystem::loadCalibration(IplImage *image, bool firstCamera)
{
  bool isFileOpened = readCameraParameters(firstCamera);

  QString tempFile;
  alvar::Camera* cam;

  if(firstCamera)
  {
    tempFile = m_calibrationFile1;
    cam = &m_cam1;
  }
  else
  {
    tempFile = m_calibrationFile2;
    cam = &m_cam2;
  }

  bool calibSetted = cam->SetCalib(tempFile.toStdString().c_str(), image->width, image->height);

  if(!calibSetted)
    cam->SetRes(image->width, image->height);

  return calibSetted;
}


bool TrackSystem::readCameraParameters(bool firstCam)
{
  QString tempFile;
  Mat* tempMat;
  Mat* tempCoeffs;
  Mat* tempProj;
  Mat* tempTransl;
  Mat* tempRot;

  if(firstCam)
  {
    tempFile = m_calibrationFile1;
    tempMat = &m_cameraMatrix1;
    tempCoeffs = &m_distortionCoeffs1;
    tempProj = &m_projectionMat1;
    tempRot = &m_rotationMat1;
    tempTransl = &m_translationMat1;
  }
  else
  {
    tempFile = m_calibrationFile2 ...

Comments

Those rotations do not match the projection matrices. The projection matrices both have an identity rotation matrix. Why the difference?

The camera matrix is different too. Did you use a different method of calculating them?

And the distortion matrix should be 5x1, not 4x1.

And those are different projection matrices than your first post.

The cameras aren't aligned horizontally or vertically. I'm not even sure if the stereo algorithm works if they aren't. Either way, that definitely means the projection matrices are wrong. They show just a horizontal difference, but there's definitely a vertical/rotational one too.

Tetragramm (2016-12-05 21:16:06 -0600)

OK, I will check it again. Maybe I copied something wrong, and if so I will correct the post.

never-ever (2016-12-06 17:13:07 -0600)

Honestly, start over. Get a chessboard and calibrate each camera individually. Then cover the overlap region and run stereoCalibrate using what you calibrated earlier as inputs.

Tetragramm (2016-12-06 18:19:41 -0600)
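(A minimal sketch of that suggested workflow, assuming the chessboard corners have already been collected into objectPoints (vector<vector<cv::Point3f>>) and imagePoints1/imagePoints2 (vector<vector<cv::Point2f>>):)

// 1. Calibrate each camera on its own.
cv::Mat K1, D1, K2, D2;
std::vector<cv::Mat> rvecs, tvecs;
cv::calibrateCamera(objectPoints, imagePoints1, imageSize, K1, D1, rvecs, tvecs);
cv::calibrateCamera(objectPoints, imagePoints2, imageSize, K2, D2, rvecs, tvecs);

// 2. Stereo-calibrate; the default flag CALIB_FIX_INTRINSIC keeps the
//    intrinsics from step 1 fixed, so only R and T are estimated.
cv::Mat R, T, E, F;
cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                    K1, D1, K2, D2, imageSize, R, T, E, F);

// 3. Rectify to get a consistent pair of projection matrices.
cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);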
