OpenCV Q&A Forum

Using solvePnP to work out the pose of my camera in terms of my model's coordinates
http://answers.opencv.org/question/25411/using-solvepnp-to-work-out-the-pose-of-my-camera-in-terms-of-my-models-coordinates/

Right now, I'm working on adding more sophisticated graphics to my markerless augmented reality app built with OpenCV. I'm using a lightweight Android graphics engine called JPCT to load and render a 3D model of the OpenCV logo that I created. I use OpenCV to detect the four corners of a book.
The graphics engine abstracts the problem as a 3D world with objects placed in it and a camera that can also be placed in the world. The issue I have at the moment is figuring out where the 'camera' should be placed relative to the origin. The origin of the world coordinates (0,0,0) is the centre of the book. The model's coordinates are the same as the world coordinates.
I know that the camera's position can be recovered using **solvePnP**.

Indeed, **solvePnP** takes the 2D image coordinates of the book's corners and their predefined 3D world coordinates, and returns:

- an output rotation matrix
- an output translation vector

**NOW...** this is where I become unsure.
I believe that the rotation matrix and translation vector can be used to work out the camera's position and orientation in the world coordinate system. Is this correct?
My understanding: if the camera's starting position is at the origin, facing down the z-axis, then we apply the translation vector to the camera's position to move it to its correct place, and then apply the rotation matrix to the camera's orientation to change its direction of sight.
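For reference, the convention behind solvePnP's outputs can be sketched numerically. This is a minimal pure-Python illustration with hypothetical values (no OpenCV dependency): the returned R and t map a *world* point into *camera* coordinates, x_cam = R·x_world + t.

```python
# Minimal sketch (hypothetical numbers, no OpenCV dependency) of the
# convention solvePnP uses: its rotation R and translation t map a WORLD
# point into CAMERA coordinates, x_cam = R @ x_world + t.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# Hypothetical solvePnP output: identity rotation, book 5 units in front
# of the camera along the optical (Z) axis.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 5.0]

book_centre = [0.0, 0.0, 0.0]  # world origin = centre of the book
x_cam = [a + b for a, b in zip(mat_vec(R, book_centre), t)]
print(x_cam)  # the book centre expressed in camera coordinates
```

Note that this is the object-to-camera transform, not directly the camera's pose in the world, which is where the question below comes in.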
Have I got this right? Let me know if I should provide more info.

Asked: Fri, 13 Dec 2013 06:02:01 -0600

Comment by tenta4:
solvePnP returns the book's translation and rotation in the camera coordinate system. You can use it for model rendering. Look here to see how:
http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/?answer=23123#post-id-23123
(Wed, 18 Dec 2013 09:07:40 -0600)

Answer by AlexanderShishkov:
solvePnP returns the rotation and translation that transform the object from its initial 3D coordinates into the camera coordinate system. In that system the camera is located at (0, 0, 0) and oriented along the Z axis. So if you know the object's position in your global system, you can find the camera's coordinates in that system too. Equivalently, the inverse of the found transformation describes the motion of the camera: if you know the camera's position at the initial moment, you can apply the inverted transformation to get its new position.

(Fri, 27 Dec 2013 07:36:49 -0600)
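A numeric sketch of the inversion this answer describes, in pure Python with hypothetical values (in real code, R would come from `cv2.Rodrigues(rvec)` applied to the rvec that solvePnP returns): the camera's orientation in world coordinates is R^T, and its position is C = −R^T·t.

```python
# Sketch of inverting solvePnP's world->camera transform (hypothetical
# numbers). Camera orientation in world coords: R^T.
# Camera centre in world coords: C = -R^T @ t.

def transpose(m):
    """Transpose a 3x3 matrix given as a list of rows."""
    return [[m[j][i] for j in range(3)] for i in range(3)]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# Hypothetical solvePnP output: 90-degree rotation about the world Y axis,
# translation 2 units along the camera's optical (Z) axis.
R = [[0.0, 0.0, 1.0],
     [0.0, 1.0, 0.0],
     [-1.0, 0.0, 0.0]]
t = [0.0, 0.0, 2.0]

R_wc = transpose(R)                       # camera orientation in the world
cam_pos = [-c for c in mat_vec(R_wc, t)]  # camera centre in the world
print(cam_pos)
```

This is why "apply t, then apply R, starting from the origin" is not quite the camera's world pose: the pose comes from the *inverse* of the transform solvePnP returns, not from applying R and t directly.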