
Does anyone actually use the return values of solvePnP() to compute the coordinates of the camera w.r.t. the object in the world space?

asked 2017-03-13 08:04:22 -0600

yorkhuang

All the demo videos of pose estimation using solvePnP() given in various posts only show a wireframe coordinate system or a wireframe object drawn on top of the target image. Does anyone actually use the return values of solvePnP() to compute the coordinates of the camera w.r.t. the object in world space? My main confusion is that the return values from solvePnP() are the rotation and translation of the object in the camera coordinate system. Can we actually use those return values to compute the camera pose w.r.t. the object in world space? I have been searching for this answer for over two months. Can anyone help me? Thanks,


1 answer


answered 2017-03-13 17:46:14 -0600

Tetragramm

The conversion between them is simple. solvePnP() returns the rotation and translation that map object points into the camera frame; to get the camera pose in the object frame, invert the rotation (for a rotation matrix the inverse is its transpose), and the new translation is the negative of the inverted rotation applied to the old translation.

    Mat R;
    Rodrigues(rvec, R);   // rotation vector -> 3x3 rotation matrix (object -> camera)
    R = R.t();            // invert the rotation (transpose of an orthonormal matrix)
    tvec = -R * tvec;     // camera position expressed in the object/world frame
    Rodrigues(R, rvec);   // back to a rotation vector (camera -> object/world)
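
To make the reversal concrete, here is a minimal sketch of the full chain wrapped in a hypothetical helper; the object points, image points and calibration inputs are assumed to come from your own feature matching and camera calibration, not from anything in this thread:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Hypothetical helper: given matched 3D object points and 2D image points,
    // return the camera position expressed in the object/world frame.
    cv::Vec3d cameraPositionInWorld(const std::vector<cv::Point3f>& objectPoints,
                                    const std::vector<cv::Point2f>& imagePoints,
                                    const cv::Mat& cameraMatrix,
                                    const cv::Mat& distCoeffs)
    {
        cv::Mat rvec, tvec;
        // rvec/tvec give the pose of the object in the camera frame
        cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

        cv::Mat R;
        cv::Rodrigues(rvec, R);           // rotation vector -> 3x3 matrix
        cv::Mat camPos = -R.t() * tvec;   // camera position in the object frame
        return cv::Vec3d(camPos.at<double>(0),
                         camPos.at<double>(1),
                         camPos.at<double>(2));
    }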

Comments

Hi, Tetragramm, thank you for your answer. The answer you gave is what I found in various posts. I just wonder if anyone actually uses this method to position himself within a space? Thanks,

yorkhuang ( 2017-03-13 21:28:48 -0600 )

Yes. See the ARUCO module.

Tetragramm ( 2017-03-13 23:10:24 -0600 )

Thank you, Tetragramm. Are you referring to this web page? https://www.uco.es/investiga/grupos/ava/node/26 Thank you for the information. I will check the site.

yorkhuang ( 2017-03-13 23:50:10 -0600 )

Yes, and also the ARUCO module of OpenCV. It works using solvePnP, and if you want to know the position of the camera relative to the object, that's the code to reverse it. ARUCO is often used both for finding the location of the camera and for finding the location of an object relative to the camera.

Tetragramm ( 2017-03-14 17:19:41 -0600 )
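
As a rough illustration of that localization workflow (not code taken from the ARUCO module itself), here is a sketch using the aruco API as it appears in the OpenCV 3.x contrib module; the dictionary choice, marker length and calibration inputs are placeholder assumptions:

    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp>   // opencv_contrib module
    #include <vector>

    // Sketch: locate the camera relative to the first detected ArUco marker.
    // Returns false if no marker was found. markerLength is the marker side length,
    // given in whatever unit you want the result in.
    bool cameraPoseFromMarker(const cv::Mat& image,
                              const cv::Mat& cameraMatrix,
                              const cv::Mat& distCoeffs,
                              double markerLength,
                              cv::Vec3d& cameraPosition)
    {
        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(image, dict, corners, ids);
        if (ids.empty())
            return false;

        // Marker poses in the camera frame (solvePnP runs under the hood)
        std::vector<cv::Vec3d> rvecs, tvecs;
        cv::aruco::estimatePoseSingleMarkers(corners, markerLength,
                                             cameraMatrix, distCoeffs, rvecs, tvecs);

        // Reverse the first marker's pose to get the camera pose in the marker frame
        cv::Mat R;
        cv::Rodrigues(rvecs[0], R);
        cv::Mat camPos = -R.t() * cv::Mat(tvecs[0]);
        cameraPosition = cv::Vec3d(camPos.at<double>(0),
                                   camPos.at<double>(1),
                                   camPos.at<double>(2));
        return true;
    }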

Hi, Tetragramm, thank you for your enthusiastic help. I just checked the official site, and ARUCO is available for OpenCV 3.2-dev only, while I am using OpenCV 2.4.13.2. Is there any difference in the pose result from solvePnP() between these two versions? ARUCO is a marker-based solution, while my study is based on natural feature tracking. So I have two questions and hope you can help me. First, does the number of detected feature points affect the result of solvePnP()? In other words, what is the minimum number of detected feature points required for accurate pose estimation? Secondly, does ARUCO use meters or centimeters as the unit for 3D coordinates? Thanks again for your help!

yorkhuang ( 2017-03-14 23:46:19 -0600 )

There's no difference in pose. ARUCO just defines the points that feed into the solvePnP algorithm.

More points are better, and the more precisely they are located the better, but the absolute minimum is 4.

ARUCO lets you choose the units based on how big the markers are. For example, look at the ChArUco board create function: you specify how large the markers are. So if you say the marker is 5 cm across, you put in 5 for the marker size and you get your pose back in centimeters.

Tetragramm ( 2017-03-15 17:12:46 -0600 )
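
To illustrate the point about units, a small sketch of creating a ChArUco board with the OpenCV 3.x contrib aruco module; the board dimensions, square/marker sizes and dictionary below are arbitrary example values:

    #include <opencv2/aruco/charuco.hpp>   // opencv_contrib module

    // Whatever unit you use for the square/marker sizes is the unit the poses come back in.
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

    // 5x7 board, squares 4 cm across, markers 2 cm across -> estimated poses are in centimeters
    cv::Ptr<cv::aruco::CharucoBoard> board =
        cv::aruco::CharucoBoard::create(5, 7, 4.0f, 2.0f, dict);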
