
Using solvePnP to work out the pose of my camera in terms of my model's coordinates

asked 2013-12-13 06:02:01 -0600

ozzyoli

Right now, I'm working on adding more sophisticated graphics to my markerless augmented reality app built using OpenCV. I'm using a lightweight Android graphics engine called JPCT to load and render a 3D model of the OpenCV logo I created. I use OpenCV to detect the four corners of a book.

The graphics engine abstracts the problem as a 3D world with objects placed in it and a camera that can also be placed in the world. The issue I have at the moment is figuring out where the 'camera' should be placed relative to the origin. The origin of the world coordinates (0,0,0) is the centre of the book. The model's coordinates are the same as the world coordinates.

I know that the camera's position can be worked out using solvePnP.

Indeed, solvePnP takes the 2D image coordinates of the book's corners and the predefined 3D world coordinates of those corners, and returns an output rotation (a Rodrigues vector that can be converted to a matrix) and an output translation vector. Now, this is where I become unsure.
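For reference, the call I'm making looks roughly like this (just a sketch: the book dimensions, the corner order and the calibration matrices are illustrative placeholders, not my actual values):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Estimate the book's pose from its four detected corners.
// imagePoints must be in the same order as the 3D corners below;
// cameraMatrix and distCoeffs come from camera calibration.
void estimateBookPose(const std::vector<cv::Point2f>& imagePoints,
                      const cv::Mat& cameraMatrix,
                      const cv::Mat& distCoeffs,
                      cv::Mat& rvec, cv::Mat& tvec)
{
    // 3D corners of the book in world/model coordinates: the book centre is
    // the origin and the cover lies in the Z = 0 plane. The 20 x 15 size is
    // an assumption for illustration only.
    std::vector<cv::Point3f> objectPoints;
    objectPoints.push_back(cv::Point3f(-10.f,  7.5f, 0.f));
    objectPoints.push_back(cv::Point3f( 10.f,  7.5f, 0.f));
    objectPoints.push_back(cv::Point3f( 10.f, -7.5f, 0.f));
    objectPoints.push_back(cv::Point3f(-10.f, -7.5f, 0.f));

    // rvec (Rodrigues rotation vector) and tvec (translation) together map
    // points from world/model coordinates into camera coordinates.
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
}
```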

I believe that the rotation matrix and translation vector can be used to figure out the position and orientation of the camera in the world coordinate system. Is this correct?

If the camera's starting position is at the origin facing down the Z axis, then we apply the translation vector to the camera's position to move it to its correct position, and then apply the rotation matrix to the camera's orientation to change its direction of sight.

Have I got this right? Let me know if I should provide more info.


Comments

solvePnP returns the book's translation and rotation in the camera coordinate system. You can use it for model rendering. Look here to see how:

http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/?answer=23123#post-id-23123

tenta4 (2013-12-18 09:07:40 -0600)

1 answer


answered 2013-12-27 07:36:49 -0600

AlexanderShishkov

solvePnP returns the rotation and translation that transform the object from its initial 3D (world/model) coordinates into the camera coordinate system. In that system the camera is located at (0,0,0) and looks along the Z axis. So if you know the object's position in your global system, you can also express it in camera coordinates. You can likewise view the inverse of that transformation as describing the pose of the camera: if R and t map world points into camera coordinates, then the camera's orientation in world coordinates is R^T and its position is -R^T * t. So if you know the camera's pose at the initial moment, applying the inverted transformation gives you the new camera pose.
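In code, the inversion might look like this (a C++ sketch, assuming rvec and tvec are the outputs of solvePnP for the book's corners):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

// Convert the solvePnP result (world -> camera) into the camera's pose
// expressed in world coordinates (camera -> world).
void cameraPoseFromPnP(const cv::Mat& rvec, const cv::Mat& tvec,
                       cv::Mat& camRotation, cv::Mat& camPosition)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);            // 3x1 Rodrigues vector -> 3x3 rotation matrix

    camRotation = R.t();               // R is orthonormal, so its transpose is its inverse
    camPosition = -camRotation * tvec; // camera centre in world coordinates
}
```

camRotation and camPosition are then the values to give to the rendering engine's camera; note that OpenCV and a graphics engine such as JPCT may use different axis conventions, so an additional fixed rotation between the two coordinate systems may still be needed.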



Stats

Asked: 2013-12-13 06:02:01 -0600

Seen: 4,769 times

Last updated: Dec 27 '13