# What are tvecs and rvecs?

When obtaining the camera calibration parameters using calibrateCamera, I get back two C++ vectors of matrices, tvecs and rvecs. It seems that these are the extrinsic parameters of the camera, i.e. the parameters that describe the camera's location and rotation. If I'm understanding it correctly, the rotation matrix rotates the scene so that the camera's optical axis is aligned with the Z-axis, and the translation vector then moves the scene by the negative of the camera position vector, so that the camera sits at the origin. Am I understanding this correctly?

Bonus question: Is it possible to reproject (u, v) points from the camera image back to world coordinates? It seems to be impossible in the general case because cameraMatrix performs a central projection so that the depth information is lost.


Bumping this :) I'd like to understand too. I get these from ArUco detection, but I don't see what they refer to. My understanding:

• tvecs is the translation vector (stored as a 3x1 matrix): the x, y, z coordinates of the chessboard or ArUco marker in the camera's coordinate system (z along the optical axis, x to the right, y downward).
• rvecs is the rotation vector (also a 3x1 matrix, in Rodrigues axis-angle form) giving the marker's orientation in the same camera coordinate system.

However, it seems I'm missing something. When I print the values of rvecs with:

rvecs[i].at<double>(0,k)


the values appear to be correlated...

( 2016-03-03 09:49:33 -0500 )

Yes, I think you are correct. And no: you cannot reproject from image coordinates back to world space exactly because the depth z is lost. If you know the z coordinate of the plane the point lies on, then you can map from image to world coordinates.
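A minimal sketch of that plane-constrained case, in camera coordinates for simplicity: when the depth z of the point is known, the pinhole projection u = fx·X/Z + cx, v = fy·Y/Z + cy can be inverted directly. The intrinsics below are made-up values, not from a real calibration.

```python
import numpy as np

# Hypothetical intrinsics (focal lengths and principal point, in pixels)
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

def backproject(u, v, z):
    # Invert the pinhole projection for a known depth z
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

P = backproject(400.0, 240.0, 2.0)   # 3-D point in camera coordinates
```

Without the known z, the same pixel corresponds to the entire ray of points obtained by varying z, which is exactly the lost-depth problem from the question.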

( 2018-08-28 03:08:13 -0500 )


The tvecs and rvecs are intermediate values used while estimating the camera matrix and distortion coefficients. There's one tvec and one rvec returned for each calibration image (not for each corner), giving the pose of your chessboard relative to the camera in that view. I'm not sure what use they are after the calibration is done.

As for your bonus question, the answer is no, but it's possible to convert (u, v) into approximate azimuth (90 - (u - cx)/width * fovx) and inclination (90 - (v - cy)/height * fovy) angles; this linear mapping is an approximation, since the exact pinhole angle is atan((u - cx)/fx). If you have some way to determine the distance from the camera to the point, then you have a real-world spherical coordinate with the camera as the origin.

From above, width and height are the image dimensions in pixels, and fovx, fovy are the camera's field-of-view angles (the first two values returned by cv2.calibrationMatrixValues()).
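The angle mapping above can be sketched directly; the image size, principal point and field-of-view values below are made up, and in a real pipeline fovx and fovy would come from cv2.calibrationMatrixValues().

```python
# Hypothetical image geometry and field of view
width, height = 640, 480
cx, cy = 320.0, 240.0
fovx, fovy = 60.0, 45.0    # degrees

def pixel_to_angles(u, v):
    # Linear approximation from the answer above: pixels offset from the
    # principal point map proportionally onto the field of view
    azimuth = 90.0 - (u - cx) / width * fovx
    inclination = 90.0 - (v - cy) / height * fovy
    return azimuth, inclination

az, inc = pixel_to_angles(320.0, 240.0)   # principal point -> (90, 90)
```

Together with a measured range, (azimuth, inclination, distance) gives the spherical coordinate mentioned above.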

