Triangulating points into world space, and using them in solvePnP.

asked 2017-07-21 12:59:29 -0600

antithing

updated 2017-07-28 14:45:32 -0600

I am triangulating points from a stereo system, then using these points in solvePnPRansac. When I reproject the points, they look good, but the tvec and rvec values returned are incorrect.

Having checked the consecutive sets of triangulated points in a 3D application, the frame-to-frame point clouds move, and the motion I expect on the camera seems to be on the point clouds instead. I have functions to update the projection matrices as I go, which should keep the points in the same world position, but they don't. I am doing something wrong. Any advice is much appreciated.

Every frame, I am:

projectedPoints = CvPnpRansac(keyPntsGood, objectPointsTri, cameraMatrix, distCoeffs, rvec, tvec);

upDatePMatrix(rvec, tvec, P1, P2);

std::vector<cv::Point3f> objectPointsTri = triangulateCv(keyPntsGoodL, keyPntsGoodR, P1, P2);

The functions above are:

cv::Mat computeProjMat(cv::Mat camMat, cv::Mat rotVec, cv::Mat transVec)
{
    cv::Mat rotMat(3, 3, CV_64F), rotTransMat(3, 4, CV_64F); //Init.
     //Convert rotation vector into rotation matrix 
    cv::Rodrigues(rotVec, rotMat);
    //flip to camera pose (invert the extrinsics: [R|t]^-1 = [R^T | -R^T t])
    rotMat = rotMat.t();  // rotation of inverse
    transVec = -rotMat * transVec; // translation of inverse (rotMat is already transposed here)

    //Append translation vector to rotation matrix
    cv::hconcat(rotMat, transVec, rotTransMat);
    //Compute projection matrix by multiplying intrinsic parameter 
    //matrix (A) with 3 x 4 rotation and translation pose matrix (RT).
    //Formula: Projection Matrix = A * RT;
    return (camMat * rotTransMat);
}

void upDatePMatrix(cv::Mat rvec, cv::Mat tvec, cv::Mat &P1return, cv::Mat &P2return)
{
    //extrinsics
    double tx = 0.118; // STEREO BASELINE, to offset the right cam from the left.

    cv::Mat tvec2 = tvec.clone();
    tvec2.at<double>(0) += tx;

    cv::Mat P1 = computeProjMat(cameraMatrix, rvec, tvec);
    cv::Mat P2 = computeProjMat(cameraMatrix, rvec, tvec2);

    P1.copyTo(P1return);
    P2.copyTo(P2return);
}


std::vector<cv::Point2f> CvPnpRansac(std::vector<cv::KeyPoint> imagePoints, std::vector<cv::Point3f> objectPoints, cv::Mat cameraMatrix, cv::Mat distCoeffs, cv::Mat &rvecRturn, cv::Mat &tvecRTurn)
{
    std::vector<cv::Point2f> points2d;
    std::vector<cv::Point3f> points3d;
    for (size_t i = 0; i < objectPoints.size(); i++)
    {
        points2d.push_back(imagePoints[i].pt);
        points3d.push_back(objectPoints[i]);
    }

    bool useExtrinsicGuess = false;
    int iterationsCount = 200;
    float reprojectionError = 1.0f;
    double confidence = 0.9;

    // rvec and tvec are globals, filled in by solvePnPRansac
    bool success = cv::solvePnPRansac(points3d, points2d, cameraMatrix, distCoeffs, rvec, tvec, useExtrinsicGuess, iterationsCount, reprojectionError, confidence, cv::noArray(), cv::SOLVEPNP_ITERATIVE);

    std::vector<cv::Point2f> projectedPoints;
    cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, projectedPoints);

    rvec.copyTo(rvecRturn);
    tvec.copyTo(tvecRTurn);

    return projectedPoints;
}





std::vector<cv::Point3f> triangulateCv(std::vector<cv::KeyPoint> keyPntsLeft, std::vector<cv::KeyPoint> keyPntsRight, cv::Mat P1, cv::Mat P2)
{
    std::vector<cv::Point3f> p3d;

    std::vector<cv::Point2f> keysLeft;
    std::vector<cv::KeyPoint> keyPntsTri;
    std::vector<cv::Point2f> keysRight;

    for (int i = 0; i < keyPntsLeft.size(); i++)
    {
        cv::Mat x3D;

        cv::Point2f point1 = keyPntsLeft[i].pt;
        cv::Point2f point2 = keyPntsRight[i].pt;

        keysLeft.push_back(point1);
        keysRight.push_back(point2 ...
(more)

Comments

Ok, your upDatePMatrix is wrong (I think). Instead of changing the P matrices each frame, keep what you calibrated. You should just transform the points at the end of triangulateCv, using the rvec and tvec from solvePnP directly.

Sorry, life happened. I might have some time this weekend to do a little demo thing to test this in.

Tetragramm ( 2017-07-26 18:00:14 -0600 )

Thank you yet again. I have attempted this approach, and updated the question...

antithing ( 2017-07-28 15:41:33 -0600 )

I don't think you need the Tran.inv(), and the function cv::transform will do it to all the points in the list at once.

Tetragramm ( 2017-07-28 17:03:27 -0600 )

So just multiply by Tran? Am I right in using the homogeneous 4x1 Mat?

antithing ( 2017-07-28 17:23:06 -0600 )