
Calculate odometry from camera poses

asked 2017-03-24 08:28:27 -0600

b2meer

I am doing pose estimation using ArUco marker maps. I followed this procedure:

  1. Created a marker map by printing markers, pasting them around a room, recording a video of them, and using that video to generate the marker map files (map.yml, map.log, and the point cloud file)
  2. Used the generated marker map to run the ArUco tracking program. The tracking program outputs camera poses in real time as three tvec and three rvec values.

Now, I want to keep a record of how much the camera has moved in x and y (2D only) using the camera poses, so that I can generate odometry from them. I am not sure how to interpret or use the tvec and rvec values for this purpose. Any ideas? Thanks.

Note: I am a beginner in pose estimation, so if I seem to be going wrong in my approach, please point it out.


2 answers


answered 2019-06-20 04:33:52 -0600

d0n13

I have applied the same code to my ArUco project to get the pose of the camera in a form suitable for SceneKit. It kind of works: the camera appears in the correct place relative to a node (a box) I've placed in the scene, but it moves around a lot, as if the pose were very unstable.

Within the OpenCV part I draw boxes around the tracked marker, and those are completely stable.

Have you any idea what might be going wrong here?

// Create a SceneKit camera pose to send back to our model
-(void) updateCameraProjection:(ArucoPayload *)payload withRotation:(Vec3d)rvec andTranslation:(Vec3d)tvec {

    // 180-degree rotation about X to convert from OpenCV's axes to SceneKit's
    Mat RotX(3, 3, cv::DataType<double>::type);
    setIdentity(RotX);
    RotX.at<double>(4) = -1; // cos(180) = -1
    RotX.at<double>(8) = -1;

    Mat R;
    Rodrigues(rvec, R);
    R = R.t();  // rotation of the inverse
    Mat rvecWorld, rvecSceneKit;
    Rodrigues(R, rvecWorld);         // rvec in world coords
    rvecSceneKit = RotX * rvecWorld; // rvec in SceneKit coords

    Mat tvecWorld = -R * (Mat)tvec, tvecSceneKit; // tvec in world coords
    tvecSceneKit = RotX * tvecWorld;              // tvec in SceneKit coords

    // Apply rotation
    Mat rotation;
    Rodrigues(rvecSceneKit, rotation);

    // eye() initializes the bottom row to (0, 0, 0, 1)
    Mat transform_matrix = Mat::eye(4, 4, CV_64FC1);
    transform_matrix(cv::Range(0,3), cv::Range(0,3)) = rotation * 1;
    transform_matrix.at<double>(0, 3) = tvecSceneKit.at<double>(0, 0);
    transform_matrix.at<double>(1, 3) = tvecSceneKit.at<double>(1, 0);
    transform_matrix.at<double>(2, 3) = tvecSceneKit.at<double>(2, 0);

    payload.cameraTransform = [self transformToSceneKitMatrix:transform_matrix];
}

-(SCNMatrix4) transformToSceneKitMatrix:(cv::Mat&) openCVTransformation {

    SCNMatrix4 scnCameraPose = SCNMatrix4Identity;

    // SCNMatrix4 is column-major, so copy the OpenCV matrix transposed
    scnCameraPose.m11 = (float) openCVTransformation.at<double>(0, 0);
    scnCameraPose.m12 = (float) openCVTransformation.at<double>(1, 0);
    scnCameraPose.m13 = (float) openCVTransformation.at<double>(2, 0);
    scnCameraPose.m14 = (float) openCVTransformation.at<double>(3, 0);

    scnCameraPose.m21 = (float) openCVTransformation.at<double>(0, 1);
    scnCameraPose.m22 = (float) openCVTransformation.at<double>(1, 1);
    scnCameraPose.m23 = (float) openCVTransformation.at<double>(2, 1);
    scnCameraPose.m24 = (float) openCVTransformation.at<double>(3, 1);

    scnCameraPose.m31 = (float) openCVTransformation.at<double>(0, 2);
    scnCameraPose.m32 = (float) openCVTransformation.at<double>(1, 2);
    scnCameraPose.m33 = (float) openCVTransformation.at<double>(2, 2);
    scnCameraPose.m34 = (float) openCVTransformation.at<double>(3, 2);

    // copy the translation (last column of the OpenCV matrix)
    scnCameraPose.m41 = (float) openCVTransformation.at<double>(0, 3);
    scnCameraPose.m42 = (float) openCVTransformation.at<double>(1, 3);
    scnCameraPose.m43 = (float) openCVTransformation.at<double>(2, 3);
    scnCameraPose.m44 = (float) openCVTransformation.at<double>(3, 3);

    return scnCameraPose;
}


answered 2017-03-24 18:56:18 -0600

Tetragramm

Well, tvec is the x, y, and z translation from the origin, in whatever units.

The rvec is made up of Rodrigues parameters. You can use the Rodrigues function to convert it to a rotation matrix.

OpenCV's solvePnP or aruco::estimatePoseBoard return a tvec and rvec that give the pose of the object relative to the camera. So the camera is the origin, and the axes are as described HERE.

To reverse that and get the camera's pose relative to the board, use this code:

Mat R;
Rodrigues(rvec, R);   // rvec -> rotation matrix
R = R.t();            // inverse rotation (transpose)
tvec = -R*tvec;       // camera position in board coordinates
Rodrigues(R, rvec);   // back to Rodrigues parameters

Now, what the rvec and tvec your program returns are relative to, I have no idea. You'd have to look at its documentation.

Of course, odometry is just adding up relative measurements to get an estimate of an absolute measurement. You already have absolute measurements, so I don't know what exactly you're trying to do.



Thanks for the detailed answer. I am using ArUco's own tracker, whose readme says that it tracks and provides camera poses. I am using ArUco from the following link. I have already created a marker map of my environment and I am running the aruco_test_markermap utility that comes with the package; its cpp file is at aruco-2.0.19/utils_markermap/aruco_test_markermap.cpp. Apparently it returns camera poses, but I can't use tvec_x and tvec_z directly as my camera's x and y locations. Can you guide me on how to interpret these pose values and get my camera's x and y from them? Thanks

b2meer ( 2017-03-29 04:15:22 -0600 )

It seems to be returning the camera pose, so do the reverse operation in my answer, and then your x, y, and z should represent the camera location in world coordinates.

Tetragramm ( 2017-03-29 18:19:03 -0600 )

Great! I did it and it worked. Now I am getting world camera coordinates. Thanks for your help.

b2meer ( 2017-03-30 05:07:04 -0600 )
