2014-12-28 12:53:23 -0600
| asked a question | Homography approximation Hi everybody, Given a homography H, a point p0, and J the Jacobian of the function induced by H, I would like to compute the 2x3 matrix A corresponding to the affine transformation that approximates H around p0. I know that the first-order formula is: p' = H.p0 + J(p0).(p - p0)
but I don't understand how to build the matrix A from H, p0 and J.
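Edit: to make it concrete, here is a sketch of what I think the computation should look like (plain C++, row-major 3x3 H; the partial derivatives come from the quotient rule, and the names are mine, so please correct me if this is off):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Affine approximation A (2x3, row-major) of a homography H (3x3, row-major)
// around the point p0 = (x0, y0).
// h(p) = ((H0*x + H1*y + H2)/w, (H3*x + H4*y + H5)/w), w = H6*x + H7*y + H8.
std::array<double, 6> affineApprox(const std::array<double, 9>& H, double x0, double y0)
{
    const double w = H[6]*x0 + H[7]*y0 + H[8];       // denominator at p0
    const double u = (H[0]*x0 + H[1]*y0 + H[2]) / w; // h(p0).x
    const double v = (H[3]*x0 + H[4]*y0 + H[5]) / w; // h(p0).y
    // Jacobian J(p0) of the induced map (quotient rule)
    const double J00 = (H[0] - u*H[6]) / w, J01 = (H[1] - u*H[7]) / w;
    const double J10 = (H[3] - v*H[6]) / w, J11 = (H[4] - v*H[7]) / w;
    // p' = h(p0) + J(p0)(p - p0)  =>  A = [ J(p0) | h(p0) - J(p0)*p0 ]
    return { J00, J01, u - (J00*x0 + J01*y0),
             J10, J11, v - (J10*x0 + J11*y0) };
}
```

Sanity check: if H is itself affine (last row 0 0 1), A should reproduce its first two rows exactly, and in general A applied to p0 should give h(p0).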
Is there someone who can help me with that? Many thanks! |
2014-09-16 06:56:06 -0600
| received badge | ● Critic
|
2014-07-17 12:35:20 -0600
| received badge | ● Enlightened
|
2014-07-17 12:35:20 -0600
| received badge | ● Good Answer
|
2013-11-08 02:19:11 -0600
| received badge | ● Nice Answer
|
2013-10-28 18:09:45 -0600
| received badge | ● Teacher
|
2013-10-27 19:01:59 -0600
| answered a question | OpenCV + OpenGL: proper camera pose using solvePnP Hi axadiw, I had the same problem, and all the samples I found on the net were crappy hacks! The three most important things to know are:
- solvePnP gives you the transfer matrix from the model's frame (i.e. the cube) to the camera's frame (this is called the view matrix).
- The camera frames are not the same in OpenCV and OpenGL: the Y axis and Z axis are inverted.
- Matrices are not stored the same way either: OpenGL matrices are column-major, whereas OpenCV matrices are row-major.
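To convince yourself that a plain transpose is exactly the memory shuffle that converts between the two storage orders, here is a minimal sketch in plain C++ (no OpenCV, names are mine):

```cpp
#include <cassert>
#include <cstddef>

// Transposing a row-major 4x4 in memory yields exactly the column-major
// layout that glLoadMatrixd expects: element (row, col) of the input ends
// up at linear index col*4 + row of the output.
void transpose4x4(const double in[16], double out[16])
{
    for (std::size_t row = 0; row < 4; ++row)
        for (std::size_t col = 0; col < 4; ++col)
            out[col * 4 + row] = in[row * 4 + col];
}
```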
So to compute the view matrix (the transfer matrix from the model's frame to the camera's frame) that will be used in OpenGL, you have to:
- Use the same coordinates to draw the cube in OpenGL and to compute the camera's pose with solvePnP (markerObjectPoints)
- Build the view matrix like this: cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, intrinsics, distortion, rvec, tvec, ...);
cv::Mat rotation, viewMatrix = cv::Mat::zeros(4, 4, CV_64F); // zero-init so the bottom row is (0, 0, 0, 1)
cv::Rodrigues(rvec, rotation); // 3x1 rotation vector -> 3x3 rotation matrix
for(unsigned int row=0; row<3; ++row)
{
   for(unsigned int col=0; col<3; ++col)
   {
      viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
   }
   viewMatrix.at<double>(row, 3) = tvec.at<double>(row, 0);
}
viewMatrix.at<double>(3, 3) = 1.0;
- Multiply the view matrix by the transfer matrix between OpenCV and OpenGL: cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0, 0) = 1.0;
cvToGl.at<double>(1, 1) = -1.0; // invert the y axis
cvToGl.at<double>(2, 2) = -1.0; // invert the z axis
cvToGl.at<double>(3, 3) = 1.0;
viewMatrix = cvToGl * viewMatrix;
Because OpenCV's matrices are stored row by row, you have to transpose the matrix so that OpenGL can read it column by column: cv::Mat glViewMatrix;
cv::transpose(viewMatrix, glViewMatrix);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(&glViewMatrix.at<double>(0, 0));
And after that it should work fine ;) Moreover, watching your video, I can see a shift between the cube and the marker, so I think you probably have calibration problems. Try with default calibration values to see if it gets better. I hope this helps ;) |
2013-10-17 14:27:10 -0600
| received badge | ● Supporter
|
2013-09-27 06:54:45 -0600
| received badge | ● Student
|
2013-09-27 04:41:16 -0600
| asked a question | Wrong mouse coordinates Hi, The coordinates recovered from the mouse callback are wrong. For example, with a 640x480 pixel window, if I click on the bottom-right corner the recovered coordinates are x:645 and y:507 instead of x:639 and y:479. I looked into the code (window_w32.cpp line 1484): pt.x = LOWORD( lParam );
pt.y = HIWORD( lParam );
GetClientRect( window->hwnd, &rect );
icvGetBitmapData( window, &size, 0, 0 );
window->on_mouse( event, pt.x*size.cx/MAX(rect.right - rect.left,1),
pt.y*size.cy/MAX(rect.bottom - rect.top,1), flags,
window->on_mouse_param );
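To illustrate where the offset comes from, here is a numeric sketch of that scaling (the 634x453 client size is an assumption I picked to roughly reproduce my numbers; the bitmap is 640x480):

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the scaling done in window_w32.cpp: the clicked coordinate is
// rescaled from the client-rect size to the bitmap size with integer math,
// so when the client rect is slightly smaller than the bitmap the reported
// coordinate overshoots the image bounds.
int scaleCoord(int clicked, int bitmapSize, int clientSize)
{
    return clicked * bitmapSize / std::max(clientSize, 1);
}
```

With a 634x453 client area, a click at (639, 479) comes back as (645, 507), which is out of the 640x480 image.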
And I'm wondering why not simply do this: POINT point;
GetCursorPos(&point);
ScreenToClient(window->hwnd, &point);
window->on_mouse( event, point.x, point.y, flags, window->on_mouse_param );
or: pt.x = LOWORD( lParam );
pt.y = HIWORD( lParam );
window->on_mouse( event, pt.x, pt.y, flags, window->on_mouse_param );
Is there something I missed?
BR. |
2013-09-09 15:52:39 -0600
| received badge | ● Editor
|
2013-09-09 15:50:30 -0600
| asked a question | Camera location computation Hi everybody, I have a question about the way to compute the camera location. Indeed, solvePnP gives us the rotation and translation vectors of the object in camera space; with cv::Rodrigues we can compute the rotation matrix R and build the matrix M = [ R | T ]. So to get the camera location in object space, I thought I had to compute the inverse of M, that is M' = [ R^t | -R^t * T ]. I did it with cv::invert but it clearly doesn't work!!
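For reference, here is the inversion I am trying to perform, as a small sketch in plain C++ (row-major R, names mine):

```cpp
#include <array>
#include <cassert>

// Inverting a rigid transform M = [R | T]: since R is orthonormal,
// M^-1 = [R^t | -R^t * T], so the camera position in the object frame
// is C = -R^t * T. R is a 3x3 row-major matrix, T a 3-vector.
std::array<double, 3> cameraPosition(const std::array<double, 9>& R,
                                     const std::array<double, 3>& T)
{
    std::array<double, 3> C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            C[i] -= R[j * 3 + i] * T[j]; // row i of R^t is column i of R
    return C;
}
```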
The only formula that I have seen in samples, and that works, is this one: M' = [ R^t | -T ].
Can somebody explain why? Many thanks. |