Need explanation of the rvecs returned from solvePnP
I am using ArUco for pose estimation, and I want to get the global world coordinates of my camera from, say, a single detected ArUco marker. For this, I need to know the rotation of the camera with respect to the marker about the y axis (the upward/downward axis). The output of ArUco / solvePnP gives me rvecs, which contains a rotation vector, and I really don't understand how this rotation vector represents the angle of rotation. I can convert it to a rotation matrix with Rodrigues, but I still don't get the actual roll, pitch, and yaw angles (i.e. the rotations about the x, y, and z axes), which is what I really need.
So, can anyone please explain how to work with the rotation encoded in rvecs, and how to get simple rotation angles about the three axes from it? Thanks.
Note: it is also possible to compute the Euler angles directly from the Rodrigues (axis-angle) rotation vector. You just need to find the correct equations.
Check out Euclidean Space. It has explanations of the different rotation representations and equations to convert from just about any of them to any other.
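For example, here is a minimal Python sketch of one such conversion (my own code, not part of ArUco): it assumes rvec is the rotation vector returned by solvePnP / estimatePoseSingleMarkers and uses the R = Rz·Ry·Rx angle order, so pick whichever convention from the Euclidean Space pages actually matches your setup.

```python
import numpy as np
import cv2

def rotation_matrix_to_euler(R):
    # Decompose R = Rz(yaw) @ Ry(pitch) @ Rx(roll); returns [roll, pitch, yaw] in radians
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    if sy > 1e-6:
        roll  = np.arctan2(R[2, 1], R[2, 2])    # rotation about x
        pitch = np.arctan2(-R[2, 0], sy)        # rotation about y
        yaw   = np.arctan2(R[1, 0], R[0, 0])    # rotation about z
    else:
        # Gimbal lock: pitch is near +/-90 degrees and yaw becomes undefined
        roll  = np.arctan2(-R[1, 2], R[1, 1])
        pitch = np.arctan2(-R[2, 0], sy)
        yaw   = 0.0
    return np.array([roll, pitch, yaw])

# Example rotation vector; in practice this is one rvec from solvePnP / estimatePoseSingleMarkers
rvec = np.array([[0.1], [0.5], [-0.2]])
R, _ = cv2.Rodrigues(rvec)                      # axis-angle -> 3x3 rotation matrix
roll, pitch, yaw = np.degrees(rotation_matrix_to_euler(R))
print("rotation about x, y, z (deg):", roll, pitch, yaw)
```

Keep in mind that the three numbers you get depend entirely on the decomposition order; a different order gives different (but equally valid) angles for the same rotation.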
I tried the matrix-to-Euler conversion implemented in the link shared by @Eduardo. The attitude angle comes out fine, which in my case is the rotation about the z axis (the line between the camera and the marker), but the values of the other two angles do not make sense. What I actually want is the rotation angle about the y axis (the line going upward/downward). Any ideas where I am going wrong?
It looks like you don't really understand what these angles represent. Check out this page and try to understand what it means and how to reverse it. The drawAxis function from ArUco can be a useful tool.
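As a rough sketch of how that usually looks (assuming camera_matrix / dist_coeffs from your own calibration and the physical marker_length; the exact ArUco function names vary a bit between OpenCV versions, so treat this as an outline rather than the exact calls for your build):

```python
import cv2
import numpy as np

# Placeholder inputs -- substitute your own image and calibration results
frame = cv2.imread("frame.png")
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)
marker_length = 0.05                             # marker side length, in metres

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for rvec, tvec in zip(rvecs, tvecs):
        # Older contrib builds use cv2.aruco.drawAxis(...) with the same arguments
        cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec, tvec, marker_length * 0.5)
    cv2.imwrite("frame_with_axes.png", frame)
```

The axes are drawn with X in red, Y in green and Z in blue, which makes it much easier to see which axis of the marker frame each of your angles actually refers to.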
Yeah, I'm a bit confused by these values, as this is the first time I am doing pose estimation. My understanding so far is that I need the rotation angle about the y axis to find the world coordinates of my camera, so that is the angle I am trying to compute, but only the rotation about the z axis comes out correctly. Please correct me if my concept, or my approach to calculating the camera's world coordinates, is wrong. Also, I have gone through the articles you advised, and I am already using the drawAxis function of ArUco. Thanks.
I don't know whether you got your answer yet, but bear in mind that (1) solvePnP gives you, as rvec and tvec, the rotation and translation that transform points expressed in the object (marker) coordinate frame into the camera coordinate frame, i.e. the pose of the marker relative to the camera, and (2), which always gets me, OpenCV defines the camera coordinate frame as +Z = forward (from the camera sensor through the lens into the world), +X = right, +Y = down.
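To illustrate point (1), here is a hedged sketch of how you could invert that pose to express the camera in the marker's frame (treating the single marker as the world origin; rvec and tvec as returned by solvePnP or estimatePoseSingleMarkers):

```python
import cv2
import numpy as np

# Placeholder pose -- in practice rvec and tvec come from solvePnP / estimatePoseSingleMarkers
rvec = np.array([[0.1], [0.5], [-0.2]])
tvec = np.array([[0.02], [-0.05], [0.60]])

# rvec/tvec map points from the marker frame into the camera frame: X_cam = R @ X_marker + t
R, _ = cv2.Rodrigues(rvec)

# Inverting the transform expresses the camera in the marker frame: X_marker = R.T @ (X_cam - t)
R_cam_in_marker = R.T                            # camera orientation in marker coordinates
cam_pos_in_marker = -R.T @ tvec                  # camera origin in marker coordinates

print("camera position in marker frame:", cam_pos_in_marker.ravel())
```

Feeding R_cam_in_marker into the same matrix-to-Euler decomposition then gives you the camera's rotation about the marker's axes; just remember point (2) when deciding which of those axes is actually "up" for your marker.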
This question, its comments, and answers should help point you in the right direction too: http://answers.opencv.org/question/92211/solvepnp-inconsistency-when-rotating-markers-on-a-plane/