
drorza's profile - activity

2020-04-15 11:25:43 -0600 received badge  Popular Question (source)
2017-01-26 09:45:24 -0600 received badge  Enthusiast
2017-01-17 09:05:39 -0600 commented question Using solvePnP camera pose - object is offset from detected marker

I've edited my question to show the current version of my code after a few iterations. I am no longer sure my issue can be described as the cube having an offset from the marker; it feels more like the cube simply does not have a consistent world position at all. I've uploaded another video for clarity: https://www.youtube.com/watch?v=NMMC83daDQs&feature=youtu.be

2017-01-17 04:07:00 -0600 commented question Using solvePnP camera pose - object is offset from detected marker

I'm afraid that didn't fix it. I've been trying multiple conversion methods I found online, and they all either lead me to the same result seen in the video, or worse (cube not visible at all, cube completely distorted, etc.).

Perhaps I should better explain my workflow, as it may have errors:
- obtain rvec and tvec from solvePnP;
- draw axes with rvec and tvec (seems correct);
- create a view matrix from rvec and tvec;
- convert said view matrix from an OpenCV Mat to a glm::mat4 (by flipping some axes and transposing the matrix);
- use the resulting view matrix to calculate an MVP matrix, with M being just an identity matrix (since I want the cube at 0,0,0) and P being a projection matrix obtained from glm::perspective;
- send MVP to the vertex shader and calculate the position with gl_Position = MVP * vec4(vPos, 1.0f);
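A minimal sketch of those last two steps with GLM, assuming the view matrix has already been converted to a glm::mat4 (the function name buildMVP and the fovY/aspect parameters are illustrative, not from the original code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: compose MVP from an already-converted view matrix.
// fovY/aspect are placeholders and should match the intrinsics fed to solvePnP.
glm::mat4 buildMVP(const glm::mat4 &viewMatrixGL, float fovY, float aspect)
{
    glm::mat4 M(1.0f);                                                         // model: cube at the origin
    glm::mat4 P = glm::perspective(glm::radians(fovY), aspect, 0.1f, 100.0f);  // projection
    return P * viewMatrixGL * M;                                               // MVP for the vertex shader
}

In the shader this pairs with gl_Position = MVP * vec4(vPos, 1.0f); a projection whose field of view does not match the fx/fy used for solvePnP is one common reason for a cube that drifts off its marker.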

2017-01-16 10:01:47 -0600 commented question Using solvePnP camera pose - object is offset from detected marker

I played around with my code for a couple of hours and figured a few things out: my object3dPoints were in the wrong order; I now init them as follows: pt(-s, +s), pt(+s, +s), pt(+s, -s), pt(-s, -s). In addition, I wasn't using aruco::drawAxis directly; rather, I copied the source for that function from the git repo (didn't want to install opencv_contrib). After some research I concluded that the original function expects color in BGR format, while my OpenCV 3.2 app uses RGB, which means that in the previous picture I posted, X and Z are actually reversed.

After applying those fixes, my axes now look correct. My cube, however, still reacts exactly as it did before (as seen in my video), so my understanding is that my conversion to OpenGL is incorrect?
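As a side note, newer OpenCV versions ship an axis-drawing helper in calib3d, so the opencv_contrib copy is not needed there. A minimal sketch reusing image, camIntrinsic, distCoeffs, Rvec and Tvec from the question code (the RGB-to-BGR conversion is only an assumption based on the app holding RGB frames):

#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

// Sketch: draw the pose axes without opencv_contrib (requires a newer OpenCV
// than the 3.2 used here). The drawing assumes BGR, so convert a debug copy first.
cv::Mat debugBGR;
cv::cvtColor(image, debugBGR, cv::COLOR_RGB2BGR);
cv::drawFrameAxes(debugBGR, camIntrinsic, distCoeffs, Rvec, Tvec, 0.5f);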

2017-01-16 02:38:27 -0600 commented question Using solvePnP camera pose - object is offset from detected marker

Good idea, wasn't aware of this function :) The axes don't seem to be correct (http://imgur.com/KYwPK1B). They aren't pointing in the expected directions (as seen here, for example: http://docs.opencv.org/trunk/singlemarkersaxis.png), though their origin seems to be right in the center of the homography. Are you spotting any error? I'm fairly sure my camera intrinsics are correct, as they're quite trivial to obtain on an iPhone.

2017-01-15 09:31:51 -0600 commented question Using solvePnP camera pose - object is offset from detected marker

Added a video to help clarify what the issue looks like: https://www.youtube.com/watch?v=HhP5Qr3YyGI&feature=youtu.be

2017-01-15 09:17:56 -0600 commented question Using solvePnP camera pose - object is offset from detected marker

Not sure what you mean by using a different scale in different parts of the app. I tried playing around with the object3dPoints scale to fit the marker, but it only seemed to affect the scale of my cube, not its origin. Is there a different place in my code where I wrongly refer to scale?
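One way to keep the scale consistent is to leave the solvePnP object points as the single source of the world unit and size the cube through its model matrix. A minimal sketch (the glm::scale call below is illustrative, not from the original code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: objHalfSize is the same value used for the solvePnP object points,
// so tvec is expressed in that unit; a unit cube is scaled to span the marker.
static const float objHalfSize = 0.5f;
glm::mat4 M = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f * objHalfSize));

Scaling through M only changes the cube's size, not its origin, which matches the behaviour described above.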

2017-01-15 08:26:28 -0600 commented question Using solvePnP camera pose - object is offset from detected marker

http://i.imgur.com/qjsAYaU.jpg http://i.imgur.com/SCxtege.jpg

2017-01-15 08:06:15 -0600 asked a question Using solvePnP camera pose - object is offset from detected marker

I have a problem in my iOS application where I attempt to obtain a view matrix using solvePnP and render a 3D cube using modern OpenGL. While my code attempts to render the 3D cube directly on top of the detected marker, it seems to render with a certain offset:

EDIT: it appears I am unable to attach images to my post due to my karma; links are in the first comment.

(On the bottom right of the image you can see an OpenCV render of the homography around the tracked marker, and the three axes drawn using rvec and tvec. The rest of the screen is an OpenGL render of the camera input frame and a 3D cube at location (0,0,0).)

The cube rotates and translates correctly whenever I move the real-world marker; however, it does not "sit" on top of the marker and seems not to have a constant world position (see video in comments).

These are what I believe to be the relevant parts of the code where the error could come from:

Extracting the view matrix from the homography:

// query the active capture format so the camera intrinsics can be derived from it
AVCaptureDevice *deviceInput = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *format = deviceInput.activeFormat;
CMFormatDescriptionRef fDesc = format.formatDescription;
CGSize dim = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true);

// principal point assumed at the image centre
static const float cx = float(dim.width) / 2.0;
static const float cy = float(dim.height) / 2.0;

// horizontal FOV comes from AVFoundation; the vertical FOV is scaled from it by the aspect ratio
static const float HFOV = format.videoFieldOfView;
static const float VFOV = ((HFOV)/cx)*cy;

// focal lengths in pixels derived from the fields of view
static const float fx = abs(float(dim.width) / (2 * tan(HFOV / 180 * float(M_PI) / 2)));
static const float fy = abs(float(dim.height) / (2 * tan(VFOV / 180 * float(M_PI) / 2)));


static const Mat camIntrinsic = (Mat_<double>(3,3) <<
                    fx, 0, cx,
                    0,  fy, cy,
                    0,  0, 1);

// marker corners in marker-local coordinates (z = 0 plane)
static const float objHalfSize = 0.5f;
Mat objPoints;
objPoints.create(4, 1, CV_32FC3);

objPoints.ptr< Vec3f >(0)[0] = Vec3f(-objHalfSize, +objHalfSize, 0);
objPoints.ptr< Vec3f >(0)[1] = Vec3f(+objHalfSize, +objHalfSize, 0);
objPoints.ptr< Vec3f >(0)[2] = Vec3f(+objHalfSize, -objHalfSize, 0);
objPoints.ptr< Vec3f >(0)[3] = Vec3f(-objHalfSize, -objHalfSize, 0);

cv::Mat raux,taux;
cv::Mat Rvec, Tvec;
cv::Mat distCoeffs = Mat::zeros(5, 1, CV_64F); // no lens distortion assumed
// estimate the marker pose from the detected image corners (mNewImageBounds)
cv::solvePnP(objPoints, mNewImageBounds, camIntrinsic, distCoeffs,raux,taux);
raux.convertTo(Rvec,CV_64F);
taux.convertTo(Tvec ,CV_64F);

DrawingUtility::drawAxis(image, camIntrinsic,Rvec, Tvec, 0.5f); //debug draw axes

// convert the rotation vector into a 3x3 rotation matrix
cv::Mat rotation;
cv::Rodrigues(Rvec, rotation);

//compose a view matrix from camera extrinsic rotation and translation
cv::Mat viewMatrix = cv::Mat::zeros(4, 4, CV_64FC1);
for(unsigned int row=0; row<3; ++row)
{
    for(unsigned int col=0; col<3; ++col)
    {
        viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
    }

    viewMatrix.at<double>(row, 3) = Tvec.at<double>(row, 0);
}

viewMatrix.at<double>(3, 3) = 1.0f;

cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64FC1);
cvToGl.at<double>(0, 0) = 1.0f;
cvToGl.at<double>(1, 1) = -1.0f; // Invert the y axis
cvToGl.at<double>(2, 2) = -1.0f; // invert the z axis
cvToGl.at<double>(3, 3) = 1.0f;
viewMatrix = cvToGl * viewMatrix;

glm::mat4 V; //OpenGL view matrix
glm::mat4 P; //OpenGL ...
(more)
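For reference, a minimal sketch of one common way to finish the conversion described in the comments, copying the row-major CV_64FC1 viewMatrix into a column-major glm::mat4 (the helper name cvViewToGlm is illustrative; the axis flip is already handled by cvToGl above, so only the storage order changes):

#include <opencv2/core.hpp>
#include <glm/glm.hpp>

// Sketch: swap storage order (row-major double -> column-major float) while
// copying; the matrix itself is unchanged.
glm::mat4 cvViewToGlm(const cv::Mat &viewMatrix)
{
    glm::mat4 V(1.0f);
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            V[col][row] = static_cast<float>(viewMatrix.at<double>(row, col));
    return V;
}

The result can then be used as the V in the MVP composition described in the comments.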