
What are tvecs and rvecs?

asked Jan 20 '16

Fahrradweg

updated Jan 21 '16

When obtaining the camera calibration parameters using calibrateCamera, I get back two C++ vectors of matrices, tvecs and rvecs. It seems that these are the extrinsic parameters of the camera, i.e. the parameters that describe the location and rotation of the camera. If I'm understanding it correctly, the rotation matrix rotates the scene so that the optical axis of the camera is aligned with the Z-axis, and the translation vector then shifts the scene by the negative of the camera position vector, so that the camera ends up at the origin. Am I understanding this correctly?
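
If that reading is right, it should just be the usual world-to-camera transform. Here is how I picture it as a minimal sketch (Python for brevity, the numbers are made up):

    import numpy as np
    import cv2

    # Made-up pose for one calibration view:
    rvec = np.array([[0.1], [0.2], [0.3]])   # Rodrigues rotation vector (3x1)
    tvec = np.array([[0.5], [0.0], [2.0]])   # translation vector (3x1)

    # Rodrigues expands the compact rotation vector into a full 3x3 rotation matrix.
    R, _ = cv2.Rodrigues(rvec)

    # A point given in world (chessboard) coordinates ...
    X_world = np.array([[0.1], [0.2], [0.0]])

    # ... ends up in camera coordinates as X_cam = R * X_world + t.
    X_cam = R @ X_world + tvec
    print(X_cam.ravel())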

Bonus question: Is it possible to reproject (u, v) points from the camera image back to world coordinates? It seems to be impossible in the general case because cameraMatrix performs a central projection so that the depth information is lost.


Comments

Upvoted :) I'd like to understand this too. I get these via ArUco markers, but I don't see what they actually refer to. To my understanding:

  • tvecs is the translation vector (stored in a 3x1 matrix): the x, y, z coordinates of the chessboard or ArUco marker in the camera's coordinate system, where z points along the optical axis, x to the right and y downwards.
  • rvecs is the rotation vector (also stored in a 3x1 matrix) of the marker in that same camera coordinate system.

However, it seems that I'm missing something. When I print the values of rvecs with:

rvecs[i].at<double>(0,k)

the values seem to be correlated...
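
(Maybe the coupling is just the axis-angle encoding: the direction of rvec is the rotation axis and its norm the angle, so the three components aren't independent. A quick sketch with invented values:)

    import numpy as np
    import cv2

    rvec = np.array([[0.1], [-0.4], [0.25]])  # invented Rodrigues rotation vector

    angle = np.linalg.norm(rvec)              # rotation angle in radians
    axis = (rvec / angle).ravel()             # unit rotation axis

    # The same rotation written out as a 3x3 matrix:
    R, _ = cv2.Rodrigues(rvec)
    print(np.degrees(angle), axis)
    print(R)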

Maltergate (Mar 3 '16)

Yes, I think you are correct. And no: you cannot reproject from the image back to world space in general, precisely because the depth z is lost. If you know the z coordinate of the plane the point lies on, however, you can map from image coordinates back to world space.
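
Roughly like this, assuming undistorted pixel coordinates and the board plane at z = 0 in world coordinates (just a sketch, the names are illustrative):

    import numpy as np
    import cv2

    def backproject_to_plane(u, v, K, rvec, tvec):
        """Intersect the viewing ray through pixel (u, v) with the world plane z = 0.
        Assumes (u, v) has already been undistorted."""
        R, _ = cv2.Rodrigues(rvec)
        # Ray direction through the pixel, in camera coordinates (up to scale).
        d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
        # Camera centre and ray direction expressed in world coordinates.
        C = -R.T @ np.asarray(tvec).reshape(3)
        d_world = R.T @ d_cam
        # Scale the ray so that its z component lands on the plane z = 0.
        s = -C[2] / d_world[2]
        return C + s * d_world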

a4re (Aug 28 '18)

1 answer


answered Nov 25 '19

The tvecs and rvecs are estimated together with the camera matrix and distortion coefficients: calibrateCamera returns one tvec and one rvec for each calibration image, describing the pose of the chessboard in that particular view. I'm not sure how much use they are once the calibration itself is done.
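
One thing they do remain useful for afterwards is checking the reprojection error of each view. A rough sketch (Python; objpoints and imgpoints are assumed to be the per-image board points and detected corners you passed to the calibration):

    import cv2

    def per_view_reprojection_error(objpoints, imgpoints, image_size):
        """Calibrate and report the mean reprojection error for every input image."""
        ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            objpoints, imgpoints, image_size, None, None)
        errors = []
        for obj, img, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
            # rvec/tvec give the board pose in this particular view, so the known
            # board corners can be reprojected and compared with the detections.
            projected, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
            errors.append(cv2.norm(img, projected, cv2.NORM_L2) / len(projected))
        return errors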

As for your bonus question, the answer is no, but it is possible to convert (u, v) into approximate azimuth (90 - (u - cx)/width * fovx) and inclination (90 - (v - cy)/height * fovy) angles. If you have some way to determine the distance from the camera to the point, then you have a real-world spherical coordinate with the camera as the origin.

In the above, width and height are the image dimensions in pixels, and fovx, fovy are the camera's field-of-view angles in degrees (the first two values returned by cv2.calibrationMatrixValues()).
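
Put together as a sketch (Python; camera_matrix, width and height are assumed to come from your calibration, and bear in mind the linear formula above is only an approximation):

    import cv2

    def pixel_to_angles(u, v, camera_matrix, width, height):
        """Approximate azimuth/inclination (in degrees) of pixel (u, v),
        using the linear formula from this answer."""
        # The first two return values are the horizontal and vertical
        # field-of-view angles in degrees.
        fovx, fovy, _, _, _ = cv2.calibrationMatrixValues(
            camera_matrix, (width, height), 0.0, 0.0)
        cx = camera_matrix[0, 2]
        cy = camera_matrix[1, 2]
        azimuth = 90.0 - (u - cx) / width * fovx
        inclination = 90.0 - (v - cy) / height * fovy
        return azimuth, inclination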



Stats

Asked: Jan 20 '16

Seen: 8,229 times

Last updated: Nov 25 '19