# Extract camera position

Hi

Given a set of points in world coordinates lying around a field, how do I extract the XYZ position of the camera?

Thanks for any help.

EDIT: cv::calibrateCamera returns a reprojection error (a double) of ~90, and my camera positions come out wrong.

Position of camera 1 is [87.32213358065832; 31.93220642490884; 13.05887540297087]

I input the points into the function following the tutorial http://docs.opencv.org/2.4/doc/tutori...

calibrateCamera(World_Coords, Image_Coords, Size(1920,1080), cameraMatrix, distCoeffs, rvecs, tvecs, CV_CALIB_FIX_K4 | CV_CALIB_FIX_K5);

The points in World_Coords and Image_Coords are ordered so that they correspond to each other.

Then I use Rodrigues() to convert rvec into the 3x3 rotation matrix, and compute the camera position as -rot_matrix.t() * tvec.

But I am not getting the right camera positions. Can I send the code and data to anyone?

Camera positions:

[87.3212582849046; 31.93218648966307; 13.06565020386278]

[88.33600161820506; 36.45175643361132; 13.09339437945696]

[52.67469116679835; 32.05365845570439; 12.67307503062286]

[53.09438819350136; 36.17409639334836; 12.70318338140262]

[9.268389455871184; 35.52733695765274; 14.36430399165124]

[9.21543336317054; 38.71223919680707; 14.1736490348868]

EDIT: I changed the flag to CV_CALIB_FIX_ASPECT_RATIO and the error dropped to 19. How do I get it down to ~1?



If you have for each point the 3D coordinate in the world frame and the corresponding 2D coordinate in the image frame and you have the intrinsic parameters of the camera, you can use solvePnP to estimate the camera pose.

(2016-01-15 03:29:55 -0500)


Calm down. We can help.

The first question is how are you defining your world points? I will assume you can extract them from your imagery, and the labels you have in those images are those points. Does that top image accurately represent your coordinate system? That is, the bottom left corner of the field is (0,0,0) and the top left is (Positive X, Positive Y, 0)?

Second, what you are doing is two things in one. That function estimates the camera parameters and distortion as well as the rotation and position. It sounds like what you need most are the position and the camera parameters.

So. Use the function initCameraMatrix2D to find the camera matrix. Use all of the data you have, whether or not the camera is moving or rotating. Then save it, because you're going to reuse it over and over. I actually write it to a file and read it in each time I use the camera.

Now. Use this matrix, and each static scene, as input to the solvePnP function. By static scene, I mean any span of time where the camera doesn't move, rotate, or zoom. If the camera can move, that probably means a single frame. You now have output for the frame: a rotation vector and a translation vector. The way you did it before seems to be correct, but to be sure, here is the code I use.

Rodrigues(rotation, R);   // rotation vector -> 3x3 rotation matrix
R = R.t();                // transpose == inverse for a rotation matrix
t = -R * translation;     // camera position in world coordinates


t now contains the position of the camera for that frame.

Now, to improve accuracy, there is one important step to take. Using the calibrateCamera function we are going to refine the camera matrix and find the distortion coefficients. Split your worldPoints and cameraPoints vectors up by frame: vector<vector<Point3f>> and vector<vector<Point2f>>, one outer vector holding one inner vector per frame, which holds the points for that frame. This is important.

Now, make sure you pass the camera matrix you created earlier to the function and use the flag CV_CALIB_USE_INTRINSIC_GUESS. You have a guess, and it should help the accuracy.

Check the returned vector<Mat>s holding the rotations and translations. They may be good enough. If not, run solvePnP again with the new camera matrix and distortion coefficients, and use that output.

If you post the world points named as the images you have above, I will be happy to double check the results.

