# solvePnP: object-to-camera pose

I have read enough about this to know that it is fairly straightforward, but I can't find an example of how to actually do it.

I use solvePnP and get rvec and tvec back: the rotation and translation, respectively, as a 3x1 rotation vector and a 3x1 translation vector.

As I understand it, this is the OBJECT's transformation, expressed with the camera sensor as the origin: it maps the object's points into the camera's coordinate frame.

My question is: how exactly do I get the camera pose from this information? I believe I need to invert the matrix?



The function solvePnP (or solvePnPRansac) considers the given 3D points to be in absolute world coordinates, so it returns the rotation and translation that make up the camera's extrinsic matrix. That is, a matrix that converts 3D world coordinates into 3D coordinates relative to the camera centre. If you compute the inverse of that matrix, you get the camera transform matrix, which gives the camera's rotation and translation relative to the "world".

Note that the rotation is returned as a 3x1 rotation vector (axis-angle representation, not Euler angles), so you will need to use cv::Rodrigues to convert it to a 3x3 rotation matrix. The extrinsic matrix is then a 4x4 matrix of the form

R00 R01 R02 T0
R10 R11 R12 T1
R20 R21 R22 T2
0   0   0   1


You can then just use cv::Mat::inv() to compute the inverse.


Perfect. Thank you for the detailed explanation.

( 2015-06-17 04:16:20 -0600 )

Hi again. If you have time, could you cast your eye over my new question, please? Still fighting with solvePnP!

( 2015-06-19 05:44:54 -0600 )
