
Using solvePnP camera pose - object is offset from detected marker

asked 2017-01-15 08:03:51 -0600 by drorza
updated 2017-01-17 08:59:06 -0600

I have a problem in my iOS application where I attempt to obtain a view matrix using solvePnP and render a 3D cube using modern OpenGL. My code is meant to render the cube directly on top of the detected marker, but it renders with a noticeable offset:

EDIT: it appears I am unable to attach images to my post due to my karma; links are in the first comment.

(On the bottom right of the image you can see an OpenCV render of the homography around the tracked marker, plus the three axes drawn using rvec and tvec. The rest of the screen is an OpenGL render of the camera input frame and a 3D cube at location (0,0,0).)

The cube rotates and translates correctly whenever I move the real-world marker, but it does not "sit" on top of the marker and does not seem to keep a constant world position (see the video in the comments).

These are what I believe to be the relevant parts of the code where the error could come from.

Extracting the view matrix from the detected marker pose:

// Query the active capture format for frame dimensions and field of view.
AVCaptureDevice *deviceInput = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *format = deviceInput.activeFormat;
CMFormatDescriptionRef fDesc = format.formatDescription;
CGSize dim = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true);

// Principal point: assume the optical centre is the image centre.
static const float cx = float(dim.width) / 2.0;
static const float cy = float(dim.height) / 2.0;

// videoFieldOfView is the horizontal FOV in degrees; the vertical FOV is
// approximated by scaling with the aspect ratio.
static const float HFOV = format.videoFieldOfView;
static const float VFOV = (HFOV / cx) * cy;

// Pinhole focal lengths in pixels: f = (size / 2) / tan(FOV / 2).
static const float fx = std::abs(float(dim.width) / (2 * std::tan(HFOV / 180 * float(M_PI) / 2)));
static const float fy = std::abs(float(dim.height) / (2 * std::tan(VFOV / 180 * float(M_PI) / 2)));


// Camera intrinsic matrix (no skew assumed).
static const Mat camIntrinsic = (Mat_<double>(3,3) <<
                    fx, 0,  cx,
                    0,  fy, cy,
                    0,  0,  1);

// Marker corners in object space: a unit square centred on the origin,
// ordered to match the detected image corners in mNewImageBounds.
static const float objHalfSize = 0.5f;
Mat objPoints;
objPoints.create(4, 1, CV_32FC3);

objPoints.ptr<Vec3f>(0)[0] = Vec3f(-objHalfSize, +objHalfSize, 0);
objPoints.ptr<Vec3f>(0)[1] = Vec3f(+objHalfSize, +objHalfSize, 0);
objPoints.ptr<Vec3f>(0)[2] = Vec3f(+objHalfSize, -objHalfSize, 0);
objPoints.ptr<Vec3f>(0)[3] = Vec3f(-objHalfSize, -objHalfSize, 0);

// Solve for the marker pose from the 3D-2D correspondences.
cv::Mat raux, taux;
cv::Mat Rvec, Tvec;
cv::Mat distCoeffs = Mat::zeros(5, 1, CV_64F); // assume an undistorted image
cv::solvePnP(objPoints, mNewImageBounds, camIntrinsic, distCoeffs, raux, taux);
raux.convertTo(Rvec, CV_64F);
taux.convertTo(Tvec, CV_64F);

DrawingUtility::drawAxis(image, camIntrinsic, Rvec, Tvec, 0.5f); // debug: draw the axes

// Convert the rotation vector to a 3x3 rotation matrix.
cv::Mat rotation;
cv::Rodrigues(Rvec, rotation);

// Compose a 4x4 view matrix from the camera extrinsic rotation and translation.
cv::Mat viewMatrix = cv::Mat::zeros(4, 4, CV_64FC1);
for(unsigned int row = 0; row < 3; ++row)
{
    for(unsigned int col = 0; col < 3; ++col)
    {
        viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
    }

    viewMatrix.at<double>(row, 3) = Tvec.at<double>(row, 0);
}

viewMatrix.at<double>(3, 3) = 1.0;

// OpenCV's camera looks down +z with y down; OpenGL looks down -z with y up.
cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64FC1);
cvToGl.at<double>(0, 0) = 1.0;
cvToGl.at<double>(1, 1) = -1.0; // invert the y axis
cvToGl.at<double>(2, 2) = -1.0; // invert the z axis
cvToGl.at<double>(3, 3) = 1.0;
viewMatrix = cvToGl * viewMatrix;

glm::mat4 V; //OpenGL view matrix
glm::mat4 P; //OpenGL ...
(more)

Comments

http://i.imgur.com/qjsAYaU.jpg http://i.imgur.com/SCxtege.jpg

drorza (2017-01-15 08:26:28 -0600)

You are using different scales in different parts of your application, and that's generally a bad idea. Try setting the four points of your object3dPoints to the actual size of your marker.
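For example, a minimal sketch (the 5 cm marker size here is hypothetical; measure your own):

// Hypothetical numbers: a 5 cm printed marker -> half-size 0.025 m.
static const float objHalfSize = 0.025f;
Mat objPoints(4, 1, CV_32FC3);
objPoints.ptr<Vec3f>(0)[0] = Vec3f(-objHalfSize, +objHalfSize, 0);
objPoints.ptr<Vec3f>(0)[1] = Vec3f(+objHalfSize, +objHalfSize, 0);
objPoints.ptr<Vec3f>(0)[2] = Vec3f(+objHalfSize, -objHalfSize, 0);
objPoints.ptr<Vec3f>(0)[3] = Vec3f(-objHalfSize, -objHalfSize, 0);
// tvec from solvePnP then comes out in metres, so your GL cube
// needs to be modelled at the same scale.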

Tetragramm (2017-01-15 09:14:07 -0600)

I'm not sure what you mean by using a different scale in different parts of the app. I tried playing around with the object3dPoints scale to fit the marker, but it only seemed to affect the scale of my cube, not its origin. Is there somewhere else in my code where I wrongly refer to scale?

drorza (2017-01-15 09:17:56 -0600)

Added a video to help clarify what the issue looks like: https://www.youtube.com/watch?v=HhP5Qr3YyGI&feature=youtu.be

drorza (2017-01-15 09:31:51 -0600)

OK, a simple debug step is to use aruco::drawAxis and see if it works. You just pass in the rvec and tvec directly and it draws on the image. If that works perfectly, your problem is the conversion to OpenGL. If it doesn't, then your problem is probably the camera calibration. Either way, come back and let me know, and I'll try to help.
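Something like this, for instance (a minimal sketch assuming the opencv_contrib aruco module is installed, using the variable names from your question):

#include <opencv2/aruco.hpp>

// Draws the x (red), y (green), z (blue) axes at the solvePnP pose.
// If they sit on the marker, the pose is good and the bug is in the
// OpenGL conversion; if not, suspect the intrinsics.
cv::aruco::drawAxis(image, camIntrinsic, distCoeffs, Rvec, Tvec, 0.5f);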

Tetragramm (2017-01-15 17:11:19 -0600)

Good idea, I wasn't aware of this function :) The axes don't seem to be correct (http://imgur.com/KYwPK1B). They aren't pointing in the expected direction (as seen here, for example: http://docs.opencv.org/trunk/singlemarkersaxis.png), though their origin seems to be right in the center of the homography. Are you spotting any error? I'm fairly sure my camera intrinsics are correct, as they're quite trivial to obtain on an iPhone.

drorza (2017-01-16 02:38:27 -0600)

I played around with my code for a couple of hours and figured a few things out. My object3dPoints were in the wrong order; I now initialize them as follows: pt(-s, +s), pt(+s, +s), pt(+s, -s), pt(-s, -s). In addition, I wasn't using aruco::drawAxis directly; rather, I copied the source for that function from the git repo (I didn't want to install opencv_contrib). After some research I concluded that the original function expects color in BGR format, while my OpenCV 3.2 app uses RGB, which means that in the previous picture I posted, X and Z are actually reversed.
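For anyone copying the same source, a sketch of that colour fix (the function name for the copied drawAxis is hypothetical):

// OpenCV drawing code assumes BGR channel order; with RGB frames the
// x (red) and z (blue) axes come out swapped. Convert around the call:
cv::cvtColor(image, image, cv::COLOR_RGB2BGR);
drawAxisCopiedFromContrib(image, camIntrinsic, distCoeffs, Rvec, Tvec, 0.5f);
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);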

After applying those fixes my axes now look correct. My cube, however, still reacts exactly as it did before (as seen in my video), so my understanding is that my conversion to OpenGL is incorrect?

drorza (2017-01-16 10:01:47 -0600)

I think so, yes. I have what I think is the proper conversion from another project, but I can't be quite sure.

Where you have the 4x4 matrix, multiply the first three values on the diagonal by -1, so that (0,0) is -1 and everything else on the diagonal is 1.
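In code, that would look roughly like this (an untested sketch replacing the cvToGl block from the question):

cv::Mat cvToGl = cv::Mat::eye(4, 4, CV_64FC1); // ones on the diagonal
cvToGl.at<double>(0, 0) = -1.0;                // flip only the x axis
viewMatrix = cvToGl * viewMatrix;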

Tetragramm (2017-01-16 18:02:52 -0600)

I'm afraid that didn't fix it. I've been trying multiple conversion methods I found online, and they all either lead me to the same result seen in the video, or worse (cube not visible at all, cube completely distorted, etc.).

Perhaps I should better explain my workflow, as it may contain errors:

- obtain rvec and tvec from solvePnP
- draw the axes with rvec and tvec (seems correct)
- create a view matrix from rvec and tvec
- convert that view matrix from an OpenCV Mat to a glm::mat4 (by flipping some axes and transposing the matrix), as sketched below
- use the resulting view matrix to calculate an MVP matrix, with M being just an identity matrix (since I want the cube at (0,0,0)) and P being a projection matrix obtained from glm::perspective
- send MVP to the vertex shader and calculate the position by gl_Position = MVP * vec4(vPos, 1.0f);
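For reference, the Mat-to-glm step looks roughly like this (a sketch using the variable names from my question; the near/far planes are placeholders):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// cv::Mat is row-major, glm::mat4 is column-major, so copy transposed.
glm::mat4 V;
for (int row = 0; row < 4; ++row)
    for (int col = 0; col < 4; ++col)
        V[col][row] = (float)viewMatrix.at<double>(row, col);

glm::mat4 M = glm::mat4(1.0f); // identity: cube at the marker origin
// The projection's FOV and aspect must match the real camera,
// or the cube will drift against the video frame.
glm::mat4 P = glm::perspective(glm::radians(VFOV), // radians in recent GLM
                               (float)dim.width / (float)dim.height,
                               0.1f, 100.0f);
glm::mat4 MVP = P * V * M;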

drorza (2017-01-17 04:07:00 -0600)

I've edited my question to show the current version of my code after a few iterations. I'm no longer sure my issue can be described as the cube having an offset from the marker; it feels more like the cube simply does not have a consistent world position at all. I've uploaded another video for clarity: https://www.youtube.com/watch?v=NMMC83daDQs&feature=youtu.be

drorza (2017-01-17 09:05:39 -0600)

1 answer


answered 2017-01-17 18:16:37 -0600 by Tetragramm

Ah, I dug up that part of the other project, and it turns out I told you several things wrong. OpenGL always puts the camera at (0,0,0), so you have to reverse the coordinates. I'm putting this as an answer so I can include a bunch of code.

To go from OpenGL to OpenCV, this is what I do. Note that there's a stop in the Viz module along the way, so there are some redundancies.

// 'pose' is a float matrix holding the OpenGL camera pose.
Mat ident;
ident.create(3, 3, pose.type());
setIdentity(ident, -1);
ident.at<float>(0, 0) = 1; // diag(1, -1, -1): flip the y and z axes

// Re-express the rotation in OpenCV's camera convention.
pose(Rect(0, 0, 3, 3)) = (ident * pose(Rect(0, 0, 3, 3)).t()).t();
Mat R = pose(Rect(0, 0, 3, 3));
Mat tBuffer = pose(Rect(3, 0, 1, 3));

// Invert the pose: the view transform is the inverse of the camera pose.
R = R.t();
tBuffer = (-R * tBuffer);
Mat rBuffer;
Rodrigues(R, rBuffer); // back to a rotation vector

Therefore, this should be what you do:

Mat ident;
ident.create(3, 3, CV_64F);   // the question's Rvec/Tvec are CV_64F
setIdentity(ident, -1);
ident.at<double>(0, 0) = 1;   // diag(1, -1, -1)

Mat R;
Rodrigues(rvec, R);           // rotation vector -> 3x3 matrix
R = R.t();                    // invert the rotation
Mat tvecTemp = (-R * tvec);   // invert the translation
R = ((ident * R).t()).t();    // flip the y and z axes

Then you put R and tvecTemp into your 4x4 view matrix. I'm sorry, I don't have very much experience with OpenGL, or I'd try it myself to verify it works.
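For completeness, packing R and tvecTemp into the 4x4 might look like this (an untested sketch that mirrors the loop already in the question):

// R is 3x3 CV_64F, tvecTemp is 3x1 CV_64F, both from the block above.
cv::Mat glViewMatrix = cv::Mat::eye(4, 4, CV_64FC1);
for (int row = 0; row < 3; ++row)
{
    for (int col = 0; col < 3; ++col)
        glViewMatrix.at<double>(row, col) = R.at<double>(row, col);
    glViewMatrix.at<double>(row, 3) = tvecTemp.at<double>(row, 0);
}
// Transpose again when copying into a column-major glm::mat4.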

