# Revision history [back]

### Obtain Camera Pose and real world position using SolvePnP C++

I am trying to measure the pose of a camera and I have done the following.

1. Mark 3-D world points (assuming z = 0, since the surface is flat) on the corners of a square on a flat surface, and define a world coordinate system (in cm).

   I have taken the top-left corner of the square as my origin and given the world points in the following (x, y) or (col, row) order: (0,0), (-12.8,0), (-12.8,12.8), (0,12.8), in cm.

2. Detect those points in my image (in pixels). The image points are in the same order as the world points.

3. I have calibrated my camera for the intrinsic matrix and distortion coefficients.

4. I use the solvePnP function to get rvec and tvec.

5. I use the Rodrigues function to get the rotation matrix.

6. To check that rvec and tvec are correct, I project the 3-D points (z = 0) back into the image plane using projectPoints, and the points land on my image correctly, with an error of about 3 pixels on the X axis.

7. Now I calculate my camera position in the world frame using the formula:

   cam_world_pos = -inverse(R) * tvec (I have verified this formula in many blogs, and it also makes sense).

8. But the x, y, and z of cam_world_pos (in cm) do not seem to be correct.
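The formula in step 7 can be checked numerically without OpenCV. The sketch below writes out the Rodrigues conversion by hand and then computes C = -Rᵀ·t, using plain arrays instead of cv::Mat purely so the numbers are easy to verify; the rvec and tvec values it is meant for are the ones printed later in this post.

```cpp
#include <cmath>

// Rotation vector -> 3x3 rotation matrix (the same conversion
// cv::Rodrigues performs), via the Rodrigues rotation formula.
void rodriguesToMatrix(const double rvec[3], double R[3][3]) {
    double theta = std::sqrt(rvec[0]*rvec[0] + rvec[1]*rvec[1] + rvec[2]*rvec[2]);
    double kx = rvec[0] / theta, ky = rvec[1] / theta, kz = rvec[2] / theta;
    double c = std::cos(theta), s = std::sin(theta), v = 1.0 - c;
    R[0][0] = c + kx*kx*v;    R[0][1] = kx*ky*v - kz*s; R[0][2] = kx*kz*v + ky*s;
    R[1][0] = ky*kx*v + kz*s; R[1][1] = c + ky*ky*v;    R[1][2] = ky*kz*v - kx*s;
    R[2][0] = kz*kx*v - ky*s; R[2][1] = kz*ky*v + kx*s; R[2][2] = c + kz*kz*v;
}

// Camera position in the world frame: C = -R^T * t.
// For a rotation matrix, inverse(R) == transpose(R).
void cameraPositionInWorld(const double R[3][3], const double t[3], double C[3]) {
    for (int i = 0; i < 3; ++i)
        C[i] = -(R[0][i]*t[0] + R[1][i]*t[1] + R[2][i]*t[2]);
}
```

Because R is a pure rotation, ||C|| must equal ||tvec||; that invariant is a quick sanity check on any implementation of this formula.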

My doubt is: if I can project the 3-D world points back to the image plane using rvec and tvec with a 3-pixel error on the X axis and almost no error on the Y axis (hopefully not too bad), why is the camera position in the world frame wrong?

I also have a doubt about the solvePnP rvec and tvec: they might be one of multiple solutions, but not the one I want.

How do I get the right rvec and tvec from solvePnP? Any other suggestions for obtaining rvec and tvec would also be helpful.
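On the multiple-solutions point: a planar target can produce two mirror-pose minima, and one generic way to choose among candidate poses is to score each by mean reprojection error and keep the lowest. The sketch below is a hand-rolled pinhole version of that idea, not OpenCV's API (newer OpenCV versions also offer solvePnPGeneric to enumerate candidate solutions, if that is available to you); the data in the test is synthetic, invented just to exercise the selection.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Pose {
    double R[3][3];  // rotation, world -> camera
    double t[3];     // translation, world -> camera
};

// Mean reprojection error (pixels) of a candidate pose over matched
// 3-D object points and detected 2-D image points, simple pinhole model.
double meanReprojError(const Pose& p, const double K[3][3],
                       const std::vector<std::array<double, 3>>& obj,
                       const std::vector<std::array<double, 2>>& img) {
    double sum = 0.0;
    for (std::size_t i = 0; i < obj.size(); ++i) {
        double X = p.R[0][0]*obj[i][0] + p.R[0][1]*obj[i][1] + p.R[0][2]*obj[i][2] + p.t[0];
        double Y = p.R[1][0]*obj[i][0] + p.R[1][1]*obj[i][1] + p.R[1][2]*obj[i][2] + p.t[1];
        double Z = p.R[2][0]*obj[i][0] + p.R[2][1]*obj[i][1] + p.R[2][2]*obj[i][2] + p.t[2];
        double u = K[0][0] * X / Z + K[0][2];
        double v = K[1][1] * Y / Z + K[1][2];
        sum += std::hypot(u - img[i][0], v - img[i][1]);
    }
    return sum / obj.size();
}

// Index of the candidate with the lowest mean reprojection error.
std::size_t bestPose(const std::vector<Pose>& cands, const double K[3][3],
                     const std::vector<std::array<double, 3>>& obj,
                     const std::vector<std::array<double, 2>>& img) {
    std::size_t best = 0;
    double bestErr = meanReprojError(cands[0], K, obj, img);
    for (std::size_t i = 1; i < cands.size(); ++i) {
        double e = meanReprojError(cands[i], K, obj, img);
        if (e < bestErr) { bestErr = e; best = i; }
    }
    return best;
}
```

In practice you would also reject any candidate that places the target behind the camera (Z ≤ 0) before comparing errors.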

EDITS:

Image size: 720 (rows) × 1280 (cols)

Calibration pattern seen by camera

The link has the picture of calibration pattern used.

New Edits

World coordinate system following Right Hand Rule and the corresponding points detected in the image

This link has the picture of the world coordinate system and also the corresponding points detected in the image plane.

The left square is my world coordinate system, a square with sides of 12.8 cm; the top-left corner is the world origin (0,0). The red points are the 3-D world points detected in the image.

The image shown is after radial-distortion correction for a fisheye-lens camera.

Camera parameters:

cameraMatrix_Front=[908.65   0     642.88
0     909.28   364.95
0        0        1]

distCoeffs_Front=[-0.4589, 0.09462, -1.46*10^-3, 1.23*10^-3]


OpenCV C++ code:

vector<Point3f> front_object_pts;
vector<Point2f> front_image_pts;
Mat rvec_front;
Mat tvec_front;
Mat rotation_front;
Mat world_position_front_cam;

//Fill front object points(x-y-z order in cms)
//It is square of side 12.8cms on Z=0 plane
front_object_pts.push_back(Point3f(0, 0, 0));
front_object_pts.push_back(Point3f(-12.8, 0, 0));
front_object_pts.push_back(Point3f(-12.8,12.8,0));
front_object_pts.push_back(Point3f(0, 12.8, 0));

//Corresponding Image points detected in the same order as object points
front_image_pts.push_back(points_front[0]);
front_image_pts.push_back(points_front[1]);
front_image_pts.push_back(points_front[2]);
front_image_pts.push_back(points_front[3]);

//Detected points in image matching the 3-D points in the same order
//(467,368)
//(512,369)
//(456,417)
//(391,416)

//Get rvec and tvec using solvePnP
solvePnP(front_object_pts, front_image_pts, cameraMatrix_Front,
Mat(4,1,CV_64FC1,Scalar(0)), rvec_front, tvec_front, false, CV_ITERATIVE);

//Output of SolvePnP
//tvec=[-26.951,0.6041,134.72]  (3 x 1 matrix)
//rvec=[-1.0053,0.6691,0.3752]  (3 x 1 matrix)

//Check rvec and tvec is correct or not by projecting the 3-D object points to image
vector<Point2f> check_front_image_pts;
projectPoints(front_object_pts, rvec_front, tvec_front,
cameraMatrix_Front, Mat(4,1,CV_64FC1,Scalar(0)), check_front_image_pts);

//Note that I pass zero distortion coefficients here as well,
//since my image points are detected after radial distortion is removed

//Get rotation matrix
Rodrigues(rvec_front, rotation_front);

//Get rotation matrix inverse
Mat rotation_inverse;
transpose(rotation_front, rotation_inverse);

//Get camera position in world coordinates
world_position_front_cam = -rotation_inverse * tvec_front;


//Actual location of camera (measured manually, approximate)

X=-47cm

Y=18cm

Z=25cm

//Obtained location

X=-110cm

Y=71cm

Z=40cm

I have also used the formula

  d = sqrt(tx² + ty² + tz²)

to calculate the distance between the world origin and the camera, but I get the wrong results.
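For what it's worth, the distance formula itself is consistent with the pose formula: since C = -Rᵀ·tvec and a rotation preserves length, ||tvec|| equals the camera-to-origin distance. A quick numeric check using only values already posted above:

```cpp
#include <cmath>

// Euclidean norm of a 3-vector.
double norm3(double x, double y, double z) {
    return std::sqrt(x*x + y*y + z*z);
}
```

With tvec = (-26.951, 0.6041, 134.72) this gives roughly 137.4 cm, while the manually measured position (-47, 18, 25) is only about 56.2 cm from the origin. So the discrepancy comes from the estimated pose itself (correspondence order, coordinate conventions, or units), not from d = sqrt(tx² + ty² + tz²).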