Calm down. We can help.

The first question is how you are defining your world points. I will assume you can extract them from your imagery, and that the labels you have in those images correspond to those points. Does that top image accurately represent your coordinate system? That is, is the bottom left corner of the field (0,0,0) and the top left (positive X, positive Y, 0)?

Secondly, what you are doing is two things in one. That function calculates camera parameters, distortion, rotation, and position. It seems what you need most is position and the camera parameters.

So. Use the function initCameraMatrix2D to find the camera matrix. Use all of the data you have, whether the camera is moving or rotating or not. Then save it, because you're going to re-use it over and over; I actually write it to a file and read it in each time I use the camera.
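A minimal sketch of that first step, assuming your correspondences are already grouped one inner vector per frame (the variable and file names here are placeholders, not from your code):

#include <opencv2/opencv.hpp>
using namespace cv;

// Estimate and save an initial camera matrix from all frames' correspondences.
Mat estimateInitialCameraMatrix(const std::vector<std::vector<Point3f>>& worldPoints,
                                const std::vector<std::vector<Point2f>>& imagePoints,
                                Size imageSize)
{
    // Rough intrinsics from all of the data, moving camera or not.
    Mat cameraMatrix = initCameraMatrix2D(worldPoints, imagePoints, imageSize);

    // Save it so it can be re-used later without recomputing.
    FileStorage fs("camera_matrix.yml", FileStorage::WRITE);
    fs << "cameraMatrix" << cameraMatrix;
    fs.release();

    return cameraMatrix;
}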

Now. Use this, together with each static scene, as input to the solvePnP function. By static scene, I mean any stretch where the camera doesn't move, rotate, or zoom; if the camera can move, that probably means a single frame. You now have output for the frame, that is, the rotation and translation vectors. The way you did it before seems to be correct, but to be sure, here is the code I use.

Mat R, t;
Rodrigues(rotation, R);   // rotation vector -> 3x3 rotation matrix
R = R.t();                // invert the rotation (orthogonal, so transpose)
t = -R * translation;     // camera position in world coordinates

t now contains the position of the camera for that frame.
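For context, here is a minimal sketch of the per-frame solvePnP call that produces those rotation and translation vectors (the variable names and the zero-distortion assumption are mine, not from your code):

// Pose for one static frame. frameWorldPoints / frameImagePoints are the
// correspondences visible in that frame; cameraMatrix comes from the step above.
std::vector<Point3f> frameWorldPoints;       // your 3D field points for this frame
std::vector<Point2f> frameImagePoints;       // the matching pixel locations
Mat distCoeffs = Mat::zeros(5, 1, CV_64F);   // no distortion estimate yet
Mat rotation, translation;
solvePnP(frameWorldPoints, frameImagePoints, cameraMatrix, distCoeffs,
         rotation, translation);
// rotation and translation map world coordinates into the camera frame;
// the conversion shown above turns them into the camera's world position.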

Now, to improve accuracy, there is one important step to take. Using the calibrateCamera function, we are going to refine the camera matrix and find the distortion coefficients. Split your worldPoints and cameraPoints vectors up by frame: vector<vector<Point3f>> and vector<vector<Point2f>>, one outer vector holding one inner vector per frame, which holds the points for that frame. This is important.

Now, make sure you pass the camera matrix you created earlier to the function and use the flag CV_CALIB_USE_INTRINSIC_GUESS. You have a guess, and it should help the accuracy.
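A sketch of that refinement call, continuing the earlier snippets (CALIB_USE_INTRINSIC_GUESS is the same flag as CV_CALIB_USE_INTRINSIC_GUESS in older OpenCV versions):

// Refine the intrinsics, passing the earlier camera matrix as the starting guess.
Mat distCoeffs;
std::vector<Mat> rvecs, tvecs;   // one rotation / translation per frame
double rms = calibrateCamera(worldPoints, imagePoints, imageSize,
                             cameraMatrix, distCoeffs, rvecs, tvecs,
                             CALIB_USE_INTRINSIC_GUESS);
// rms is the reprojection error in pixels; smaller generally means a better fit.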

Check the returned vector<Mat> outputs, which are the per-frame rotations and translations. They may be good enough. If not, use the new camera matrix and distortion coefficients in solvePnP again, and use those results.
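If you do go back to solvePnP, the only change is passing the refined matrices (again, a sketch with the same assumed names):

// Same per-frame call as before, now with the refined intrinsics and distortion.
Mat rotation, translation;
solvePnP(frameWorldPoints, frameImagePoints, cameraMatrix, distCoeffs,
         rotation, translation);
// Then apply the same Rodrigues / transpose conversion to get the camera position.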

If you post the world points named as the images you have above, I will be happy to double check the results.