# ArUco orientation using the function aruco.estimatePoseSingleMarkers()

Hi everyone!

I'm trying to program a Python app that determines the position and orientation of an ArUco marker. I calibrated the camera and used aruco.estimatePoseSingleMarkers, which returns the translation and rotation vectors. The translation vector works fine, but I don't understand how the rotation vector works. I took some pictures to illustrate my problem with the "roll" rotation:

Here the rotation vector is approximately [in degrees]: [180 0 0]

Here the rotation vector is approximately [in degrees]: [123 -126 0]

And here the rotation vector is approximately [in degrees]: [0 -180 0]

And I don't see the logic in these angles. I've tried the other two rotations (pitch and yaw) and they also appear "random". So if you have an explanation I would be very happy :)


What does "is approximately [in degrees]" mean? You get an (exact) vector of 3 angles in radians.

(2019-07-09 10:37:17 -0500)

Yes, but I rounded them and converted to degrees to get a better idea of their meaning.

(2019-07-09 10:40:56 -0500)


My first advice is to look at a course or a book on rigid-body transformations and homogeneous transformations. These topics are well covered in robotics or computer graphics courses.

You have to understand what rvec and tvec are. First, the camera model used in computer vision is described here. Additional information on pose estimation is described here.

rvec is a Rodrigues rotation vector, i.e. an axis-angle representation.

The rotation direction is described by a unit vector, and the "amount" of rotation by the length of the vector. This representation avoids issues you can have with Euler angles, e.g. gimbal lock. A quaternion is another representation of rotation that doesn't have the gimbal lock issue.

Most importantly, they are not Euler angles.
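To make the axis/angle split concrete, here is a minimal NumPy sketch of Rodrigues' formula; it computes the same 3x3 matrix that cv2.Rodrigues(rvec) would return, with the axis and angle made explicit:

```python
import numpy as np

def rodrigues_to_matrix(rvec):
    """Convert an axis-angle (Rodrigues) vector to a 3x3 rotation matrix.

    Equivalent to the matrix cv2.Rodrigues(rvec) returns; written out in
    plain NumPy so the axis/angle split is visible.
    """
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)       # rotation angle = length of the vector
    if theta < 1e-12:                  # no rotation -> identity
        return np.eye(3)
    k = rvec / theta                   # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],  # skew-symmetric cross-product matrix
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues' rotation formula
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# rvec ~ [pi, 0, 0]: a 180 degree rotation about the X-axis,
# like the first photo in the question
R = rodrigues_to_matrix([np.pi, 0.0, 0.0])
print(np.round(R, 3))  # diag(1, -1, -1)
```

Note how the "angles" the asker printed are just the components of this vector scaled to degrees, not three separate rotations.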

Now, what do rvec and tvec represent?

They allow transforming a 3D point expressed in one coordinate system into another. Here, you transform 3D points expressed in the tag frame into the camera frame:
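In code, that transform is just a homogeneous 4x4 matrix built from R (from cv2.Rodrigues(rvec)) and tvec. A sketch with assumed pose values (a tag facing the camera, 10 cm away):

```python
import numpy as np

# Hypothetical pose, matching the first photo: the tag is rotated 180 deg
# about the camera X-axis (rvec ~ [pi, 0, 0]) and sits 10 cm in front of
# the camera along its Z-axis (tvec in metres).
R = np.diag([1.0, -1.0, -1.0])   # what cv2.Rodrigues(rvec)[0] would give
t = np.array([0.0, 0.0, 0.10])

# Homogeneous transform from the tag frame to the camera frame
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# A corner of a 4 cm marker, expressed in the tag frame
p_tag = np.array([0.02, 0.02, 0.0, 1.0])

p_cam = T @ p_tag                # (x, y, z, 1) in the camera frame
print(p_cam)                     # [0.02, -0.02, 0.1, 1.0]
```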

So, for the first case you should have (here I use the tag X-axis as an example):

For this trivial case, you can recover the rotation matrix by looking at the values that transform the X-, Y- and Z-axes of the tag into the camera frame.
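For the 180-degree case above (rvec ≈ [180, 0, 0] in degrees), this "read the columns" trick can be sketched as follows; the axis directions are assumed from the photos:

```python
import numpy as np

# For the first photo the tag's X-axis lines up with the camera's X-axis,
# while its Y- and Z-axes point the opposite way.  The columns of R are
# simply the tag axes expressed in the camera frame:
x_tag_in_cam = np.array([1.0,  0.0,  0.0])
y_tag_in_cam = np.array([0.0, -1.0,  0.0])
z_tag_in_cam = np.array([0.0,  0.0, -1.0])

R = np.column_stack([x_tag_in_cam, y_tag_in_cam, z_tag_in_cam])
print(R)  # same matrix cv2.Rodrigues would produce for rvec = [pi, 0, 0]
```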

In my opinion, it is very rare to actually need Euler angles, and I don't like them. Why?

First, because there are 12 different representations/conventions of Euler angles. So you first have to agree on which Euler convention you are using when you have to deal with other people / code.
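To see why the convention matters, here is a NumPy sketch (convention names and angle values chosen for illustration): the same rotation matrix decomposed with a Z-Y-X order and with an X-Y-Z order gives two completely different angle triplets.

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_zyx(R):
    """Angles (yaw, pitch, roll) such that R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    return (np.arctan2(R[1, 0], R[0, 0]),
            -np.arcsin(R[2, 0]),
            np.arctan2(R[2, 1], R[2, 2]))

def euler_xyz(R):
    """Angles (a, b, c) such that R = Rx(a) @ Ry(b) @ Rz(c)."""
    return (np.arctan2(-R[1, 2], R[2, 2]),
            np.arcsin(R[0, 2]),
            np.arctan2(-R[0, 1], R[0, 0]))

# One and the same rotation...
R = Rz(0.3) @ Ry(0.5) @ Rx(0.7)
print(euler_zyx(R))   # recovers (0.3, 0.5, 0.7)
print(euler_xyz(R))   # a different triplet describing the same R
```

Both triplets are "correct" for their own convention, which is exactly why printing Euler angles without naming the convention is meaningless.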

Then, Gimbal lock.

You only need Euler angles when you have to interpret the rotation, so only for printing or as user input (e.g. to rotate a CAD model in Blender).

The rest of the time, computations should be done using the rotation matrix or quaternion representation.
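Going from rvec to a quaternion is also straightforward, since rvec already is an axis-angle pair. A small sketch (scalar-first (w, x, y, z) ordering assumed):

```python
import numpy as np

def rvec_to_quaternion(rvec):
    """Axis-angle (Rodrigues) vector -> unit quaternion (w, x, y, z)."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)       # rotation angle
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])   # identity rotation
    axis = rvec / theta                # unit rotation axis
    w = np.cos(theta / 2.0)
    xyz = np.sin(theta / 2.0) * axis
    return np.concatenate([[w], xyz])

# 180 degrees about X, as in the first photo
q = rvec_to_quaternion([np.pi, 0.0, 0.0])
print(np.round(q, 6))   # [0, 1, 0, 0]
```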


Thanks a lot for your answer, now it's a lot clearer! :) I was struggling to find information about computer vision. Yeah, you're right, those Euler angles are really tricky and there is no consensus about their representation. I remember that our teacher almost forbade us to use them. So I will follow your advice and work with the rotation matrix using cv2.Rodrigues(rvec). If I understand well, to find the coordinates of the tag, I can simply do the following: (x,y,z,1)_tag = inv(transform_matrix) (x,y,z,1)_camera? Basically I'm working on a "vision system" for a robot (ABB MR6400). The camera would be on the robot's arm and, knowing its tool coordinates, it would know the position of the tag.

(2019-07-10 04:41:15 -0500)

For the inverse transformation, have a look at this, p. 72. You don't need to compute the matrix inverse.

Not sure I understand what you need. In (x,y,z,1)_cam = [R | t] (x,y,z,1)_tag, t is the translation of the tag frame wrt the camera frame, expressed in the camera coordinate system. That means that if tx = 10 cm, the tag center is at 10 cm along the camera X-axis.

If you invert the transformation, you get the camera frame wrt the tag frame, expressed in the tag coordinate system.

(2019-07-10 05:17:26 -0500)
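For reference, that inverse can be written without ever calling a matrix-inverse routine, because R is orthonormal. A sketch with assumed pose values:

```python
import numpy as np

# Hypothetical pose of the tag wrt the camera: 180 deg about X,
# tag centre 10 cm along the camera Z-axis.
R = np.diag([1.0, -1.0, -1.0])
t = np.array([0.0, 0.0, 0.10])

# Inverse rigid transform (camera wrt tag) without np.linalg.inv:
# R is orthonormal, so inv(R) = R.T, and the translation flips accordingly.
R_inv = R.T
t_inv = -R.T @ t

T = np.eye(4)
T[:3, :3], T[:3, 3] = R, t
T_inv = np.eye(4)
T_inv[:3, :3], T_inv[:3, 3] = R_inv, t_inv

print(np.round(T @ T_inv, 9))   # identity: the two transforms cancel
```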

You're right, my bad, I don't need this transformation. But I'm still struggling with the orientation of the ArUco marker... I converted rvec into a quaternion and visualized it, but it doesn't represent the way the marker is oriented.

(2019-07-11 04:08:03 -0500)

Drawing the axes is the way to make sure the pose is correctly estimated (red is the X-axis, green the Y-axis and blue the Z-axis). Check that the axes correspond to how you have defined the object's 3D points.

(2019-07-12 05:23:48 -0500)
