OpenCV solvePnP result to Eigen quaternion

asked 2018-01-20 07:19:41 -0600

antithing

updated 2018-01-22 13:44:19 -0600

I am running OpenCV solvePnP using 3D points that I pass in manually and 2D image points taken from the corners of an ArUco marker.

My four 2D and four 3D points are as follows:

2d: [631.811, 359.303]
2d: [750.397, 364.894]
2d: [761.65, 448.106]
2d: [621.837, 440.352]

3d: [2.73788, -2.19532, 119.784]
3d: [-18.8274, -3.25251, 119.167]
3d: [-17.7176, -15.5556, 101.448]
3d: [3.84767, -14.4984, 102.065]

My camera matrix is:

[657.0030186215286, 0, 646.6970647943629;
 0, 657.0030186215286, 347.1175117491306;
 0, 0, 1]
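
For reference, a minimal sketch of the corresponding solvePnP call (not the original code; the zero distortion vector and the variable names are assumptions) would look like this:

#include <iostream>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

int main()
{
    // the four manually supplied 3D points and the four detected 2D corners
    std::vector<cv::Point3f> objectPoints = {
        {   2.73788f,  -2.19532f, 119.784f },
        { -18.8274f,   -3.25251f, 119.167f },
        { -17.7176f,  -15.5556f,  101.448f },
        {   3.84767f, -14.4984f,  102.065f }
    };
    std::vector<cv::Point2f> imagePoints = {
        { 631.811f, 359.303f },
        { 750.397f, 364.894f },
        { 761.65f,  448.106f },
        { 621.837f, 440.352f }
    };

    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        657.0030186215286, 0,                 646.6970647943629,
        0,                 657.0030186215286, 347.1175117491306,
        0,                 0,                 1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F); // assumed: no distortion

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    std::cout << "rot: " << rvec << "\ntran: " << tvec << std::endl;
    return 0;
}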

When I run this, I get rvec and tvec:

rot: [3.542907682527767e-08;
 9.321253763995684e-10;
 3.141592612624269]



tran: [-2.599387317758439e-06;
 7.428777431217353e-07;
 -1.306287668720693e-06]

Already, this looks strange. Please see the attached image for an example of where the marker is in relation to the camera. I then run the rvec and tvec through a function to get the camera pose relative to the marker as an Eigen::Quaterniond:

void GetCameraPoseEigen(cv::Vec3d tvecV, cv::Vec3d rvecV, Eigen::Vector3d &Translate, Eigen::Quaterniond &quats)
{
    cv::Mat tvec = DoubleMatFromVec3b(tvecV);
    cv::Mat rvec = DoubleMatFromVec3b(rvecV);

    cv::Mat R;
    cv::Rodrigues(rvec, R);   // R is 3x3
    R = R.t();                // rotation of the inverse pose
    tvec = -R * tvec;         // translation of the inverse pose

    Eigen::Matrix3d mat;
    cv2eigen(R, mat);

    Eigen::Quaterniond EigenQuat(mat);
    quats = EigenQuat;

    double x_t = tvec.at<double>(0, 0);
    double y_t = tvec.at<double>(1, 0);
    double z_t = tvec.at<double>(2, 0);

    // scale the translation by 10
    Translate.x() = x_t * 10;
    Translate.y() = y_t * 10;
    Translate.z() = z_t * 10;
}
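
DoubleMatFromVec3b is not shown in the question; judging by how it is used, it presumably just packs a cv::Vec3d into a 3x1 CV_64F matrix. A sketch of such a helper (an assumption, not the original) might be:

// assumed helper (not shown in the question): pack a cv::Vec3d
// into a 3x1 CV_64F matrix so it can be used in cv::Mat arithmetic
cv::Mat DoubleMatFromVec3b(const cv::Vec3d &v)
{
    cv::Mat m = (cv::Mat_<double>(3, 1) << v[0], v[1], v[2]);
    return m;
}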

The resulting quaternion is:

1.12774e-08 2.96705e-10 1 -2.04828e-08

Which is obviously wrong. But where is my mistake? Do my 2D-3D correspondences look correct? Or is it my Eigen conversion?

Thank you.

[image: marker position relative to the camera]

EDIT:

I have adjusted the getCornersInWorld function based on the helpful comments below.

std::vector<cv::Point3f> getCornersInWorld(double side, cv::Vec3d rvec, cv::Vec3d tvec) {

    //marker half size
    double half_side = side / 2;

    // compute rot_mat
    cv::Mat rot_mat;
    cv::Rodrigues(rvec, rot_mat);

    // transpose of rot_mat
    cv::Mat rot_mat_t = rot_mat.t();

    //points in marker space
    cv::Point3d cnr1(0, -half_side, half_side);
    cv::Point3d cnr2(0, half_side, half_side);
    cv::Point3d cnr3(0, half_side, -half_side);
    cv::Point3d cnr4(0, -half_side, -half_side);

    //to mat
    cv::Mat cnr1Mat(cnr1);
    cv::Mat cnr2Mat(cnr2);
    cv::Mat cnr3Mat(cnr3);
    cv::Mat cnr4Mat(cnr4);

    //rotate points
    cv::Mat pt_newMat1 = (-rot_mat_t * cnr1Mat);
    cv::Mat pt_newMat2 = (-rot_mat_t * cnr2Mat);
    cv::Mat pt_newMat3 = (-rot_mat_t * cnr3Mat);
    cv::Mat pt_newMat4 = (-rot_mat_t * cnr4Mat);

    // convert tvec to point
    cv::Point3d tvec_3d(tvec[0], tvec[1], tvec[2]);
    cv::Mat tvecMat(tvec_3d);

    //rotate tvec
    cv::Mat tvec_new = (-rot_mat_t * tvecMat);

    //subtract the rotated tvec from all corners
    pt_newMat1 = pt_newMat1 - tvec_new;
    pt_newMat2 = pt_newMat2 - tvec_new;
    pt_newMat3 = pt_newMat3 - tvec_new;
    pt_newMat4 = pt_newMat4 - tvec_new;

    //back to pnt
    cv::Point3d p1(pt_newMat1);
    cv::Point3d p2(pt_newMat2);
    cv::Point3d p3(pt_newMat3);
    cv::Point3d p4(pt_newMat4);

    std::vector<cv::Point3f> retPnts;
    retPnts.push_back(p1);
    retPnts.push_back(p2);
    retPnts.push_back(p3);
    retPnts.push_back(p4);

    return retPnts;
}
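
One way to sanity-check the corners returned by this function (a sketch, not part of the original code; the helper name reprojectCorners is made up, and cameraMatrix/distCoeffs are assumed to be the calibration used for detection) is to reproject them with zero rvec/tvec and compare against the detected 2D corners:

// sketch: if the corners really are expressed with the camera at the origin,
// projecting them with zero rvec/tvec should land on the detected 2D corners
std::vector<cv::Point2f> reprojectCorners(const std::vector<cv::Point3f> &corners,
                                          const cv::Mat &cameraMatrix,
                                          const cv::Mat &distCoeffs)
{
    std::vector<cv::Point2f> reprojected;
    cv::projectPoints(corners, cv::Vec3d(0, 0, 0), cv::Vec3d(0, 0, 0),
                      cameraMatrix, distCoeffs, reprojected);
    return reprojected;
}

// e.g. reprojectCorners(getCornersInWorld(markerSide, rvec, tvec), cameraMatrix, distCoeffs)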

This gives me:

2d: [640.669, 489.541]
2d: [746 ...

Comments

How did you get those 3D points for solvePnP? They aren't simple enough to draw any conclusions from. Try setting them to z = 0, with (x, y) being the offset from the center of the marker to each corner.

Tetragramm ( 2018-01-20 20:12:40 -0600 )
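
A sketch of that suggestion (illustrative only; the helper name markerObjectPoints is made up, and the corner order is assumed to match the ArUco detector's top-left, top-right, bottom-right, bottom-left convention):

// sketch: marker corners in the marker's own frame, lying in the z = 0 plane
std::vector<cv::Point3f> markerObjectPoints(double side)
{
    float half = static_cast<float>(side) / 2.0f;
    return {
        { -half,  half, 0.0f },   // top-left
        {  half,  half, 0.0f },   // top-right
        {  half, -half, 0.0f },   // bottom-right
        { -half, -half, 0.0f }    // bottom-left
    };
}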

Hi, thanks for your reply. I am getting the 3D points by running solvePnP once (using the ArUco module), then extrapolating the 3D points in world space from the marker pose. The reason for this is that I need to have multiple markers in frame, randomly placed, and run solvePnP on them.

When I add the points manually, for this one marker, like this:

cv::Point3d cnr1(0, -10, 10);
cv::Point3d cnr2(0, 10, 10);
cv::Point3d cnr3(0, 10, -10);
cv::Point3d cnr4(0, -10, -10);

I get:

rot: [0.4002261041215546;
 0.4615760012941126;
 -1.49600332980762]
tran: [6.236101764494571;
 15.08829724022298;
 101.9666936722527]

-0.17904 -0.206485 0.669233 0.690968

Which looks correct. So I must have my 3D points wrong? Or have them in the wrong order?

antithing ( 2018-01-21 05:20:53 -0600 )

std::vector<cv::Point3f> getCornersInCameraWorld(double side, cv::Vec3d rvec, cv::Vec3d tvec) {

    double half_side = side / 2;
    cv::Mat rot_mat;
    cv::Rodrigues(rvec, rot_mat);

    cv::Mat rot_mat_t = rot_mat.t();

    double *tmp = rot_mat_t.ptr<double>(0);
    cv::Point3f camWorldE(tmp[0] * half_side,
                          tmp[1] * half_side,
                          tmp[2] * half_side);

    tmp = rot_mat_t.ptr<double>(1);
    cv::Point3f camWorldF(tmp[0] * half_side,
                          tmp[1] * half_side,
                          tmp[2] * half_side);

    // convert tvec to a point
    cv::Point3f tvec_3f(tvec[0], tvec[1], tvec[2]);

    // return vector: start every corner at tvec, then offset by E and F
    std::vector<cv::Point3f> ret(4, tvec_3f);

    ret[0] += camWorldE + camWorldF;
    ret[1] += -camWorldE + camWorldF;
    ret[2] += -camWorldE - camWorldF;
    ret[3] += camWorldE - camWorldF;

    return ret;
}
antithing ( 2018-01-21 05:23:35 -0600 )

That is my function to get the world-space corners from rvec and tvec. Thanks again!

antithing ( 2018-01-21 05:24:49 -0600 )

I can't quite follow what's going on, but it doesn't look right. You definitely need all the information in the rotation matrix.

First, invert the rotation matrix

Mat rot_mat_t = rot_mat.t();

Second, rotate all the points, including the tvec.

pt_new = (-rot_mat_t * pt_old)

Third, subtract the tvec from everything, which leaves the camera center at (0,0,0), and everything else positioned appropriately.

Tetragramm ( 2018-01-21 16:47:55 -0600 )

Thank you! To clarify: pt_old above is the point (0, -half_marker_size, half_marker_size), etc.? So basically, set the points as if the middle of the marker is at 0, rotate them all with the rotation matrix, and subtract tvec. Is that right?

antithing ( 2018-01-21 17:07:39 -0600 )

All the points you are interested in. You also need to do the same with tvec, and what you subtract is the tvec_new.

Tetragramm ( 2018-01-21 17:15:58 -0600 )

I have updated my code and added it to the question above. It looks better, but it is still not correct. If you have a moment, could you take a look over the new function, please? Thank you again for your time!

antithing ( 2018-01-22 13:45:04 -0600 )

Well, if the camera is at (0, 0, 0), the marker points should all have a +z value. So if they're negative, something is wrong. I'll try to take a look later, depending on how much I get done.

Tetragramm ( 2018-01-23 18:05:46 -0600 )
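
A quick way to apply that check to the corners returned by getCornersInWorld (a sketch; the helper name cornersInFrontOfCamera is made up):

// sketch: every corner should sit in front of the camera (positive z)
bool cornersInFrontOfCamera(const std::vector<cv::Point3f> &corners)
{
    for (const cv::Point3f &p : corners)
        if (p.z <= 0)
            return false;
    return true;
}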

Thank you again; I am still stuck on this.

antithing ( 2018-01-24 09:31:24 -0600 )