This forum is disabled, please visit https://forum.opencv.org

2014-12-28 12:53:23 -0500 | asked a question | Homography approximation Hi everybody, given a homography, I know that the first-order formula is: but I don't know and don't understand how to compute the matrix. Many thanks! |
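(The question's formulas did not survive archiving. For context, a standard first-order approximation of a homography, which is presumably what was asked about, is the following; this is my reconstruction, not the poster's formula.)

```latex
u = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \qquad
v = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}.
% Writing w = h_{31}x + h_{32}y + h_{33}, the first-order (affine)
% approximation around a point p_0 uses the Jacobian
J = \frac{1}{w}
    \begin{pmatrix}
      h_{11} - u\,h_{31} & h_{12} - u\,h_{32} \\
      h_{21} - v\,h_{31} & h_{22} - v\,h_{32}
    \end{pmatrix}
\quad\text{evaluated at } p_0, \qquad
H(p) \approx H(p_0) + J\,(p - p_0).
```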

2014-09-16 06:56:06 -0500 | received badge | ● Critic (source) |

2014-07-17 12:35:20 -0500 | received badge | ● Enlightened (source) |

2014-07-17 12:35:20 -0500 | received badge | ● Good Answer (source) |

2013-11-08 02:19:11 -0500 | received badge | ● Nice Answer (source) |

2013-10-28 18:09:45 -0500 | received badge | ● Teacher (source) |

2013-10-27 19:01:59 -0500 | answered a question | OpenCV + OpenGL: proper camera pose using solvePnP Hi axadiw, I had the same problem, and all the samples I found on the net were crude hacks! The three most important things to know are:

- solvePnP gives you the transform from the model's frame (i.e. the cube) to the camera's frame (this is the view matrix).
- The camera frames in OpenCV and OpenGL are not the same: the Y axis and Z axis are inverted.
- Matrix storage also differs: OpenGL matrices are column-major, whereas OpenCV matrices are row-major.

So to compute the view matrix (the transform from the model's frame to the camera's frame) that OpenGL will use, you have to:

1. Use the same coordinates to draw the cube in OpenGL as you used to compute the camera's pose with solvePnP (markerObjectPoints), then build the view matrix. Note that `viewMatrix` must be zero-initialized, otherwise the unset entries of row 3 are garbage:

```cpp
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, intrinsics, distortion, rvec, tvec, ...);

cv::Mat rotation;
cv::Mat viewMatrix = cv::Mat::zeros(4, 4, CV_64F); // zero-init so row 3 is (0, 0, 0, 1)
cv::Rodrigues(rvec, rotation);
for (unsigned int row = 0; row < 3; ++row)
{
    for (unsigned int col = 0; col < 3; ++col)
        viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
    viewMatrix.at<double>(row, 3) = tvec.at<double>(row, 0);
}
viewMatrix.at<double>(3, 3) = 1.0;
```

2. Multiply the view matrix by the transfer matrix between OpenCV and OpenGL:

```cpp
cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0, 0) = 1.0;
cvToGl.at<double>(1, 1) = -1.0; // invert the y axis
cvToGl.at<double>(2, 2) = -1.0; // invert the z axis
cvToGl.at<double>(3, 3) = 1.0;
viewMatrix = cvToGl * viewMatrix;
```

3. Because OpenCV matrices are stored by row, transpose the matrix so that OpenGL can read it by column:

```cpp
cv::Mat glViewMatrix;
cv::transpose(viewMatrix, glViewMatrix);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(&glViewMatrix.at<double>(0, 0));
```

After that it should work fine ;) Also, watching your video I can see a shift between the cube and the marker, so I think you probably have calibration problems. Try with default values to see if it improves.
I hope it will be useful ;) |

2013-10-17 14:27:10 -0500 | received badge | ● Supporter (source) |

2013-09-27 06:54:45 -0500 | received badge | ● Student (source) |

2013-09-27 04:41:16 -0500 | asked a question | Wrong mouse coordinates Hi, The coordinates returned by the mouse callback are wrong. For example, with a 640x480-pixel window, if I click on the bottom-right corner the reported coordinates are x:645 and y:507 instead of x:639 and y:479. I looked into the code (window_w32.cpp line 1484): And I'm wondering why not simply do this: or: Is there something I missed? BR. |

2013-09-09 15:52:39 -0500 | received badge | ● Editor (source) |

2013-09-09 15:50:30 -0500 | asked a question | Camera location computation Hi everybody, I have a question about how to compute the camera location. Indeed, solvePnP gives us the rotation and translation vectors of the object in the camera frame; with cv::Rodrigues we can compute the rotation matrix and build the matrix So to get the camera location in object space, I thought I had to compute the inverse of the matrix M, that is Many thanks. |

Copyright OpenCV foundation, 2012-2018. Content on this site is licensed under a Creative Commons Attribution Share Alike 3.0 license.