# Calculate odometry from camera poses

I am doing pose estimation using ArUco marker maps. I followed this procedure:

1. Created a marker map by printing markers, pasting them around a room, recording a video of them, and using that video to generate the marker map files (map.yml, map.log, and the point cloud file).
2. Used the generated marker map to run the aruco tracking program, which outputs camera poses in real time as three tvec and three rvec values.

Now I want to keep a record of how much the camera has moved in x and y (2D only) using these camera poses, so that I can generate odometry from them. I am not sure how to interpret or use the tvec and rvec values for this purpose. Any ideas? Thanks.

Note: I am a beginner in pose estimation, so if I seem to be going wrong in my approach, please say so.



I have applied the same code in my ArUco project to get the camera pose in a form suitable for SceneKit. It kind of works: the camera appears in the correct place relative to a node (a box) I've placed in the scene, but it moves about a lot, as if the pose is very unstable.

Within the OpenCV part I have drawn boxes around the tracked marker and they are completely stable.

Have you any idea what might be going wrong here?

```cpp
// Create a SceneKit camera pose to send back to our model.

// 180-degree rotation about x, to flip OpenCV's camera axes (y down,
// z forward) into SceneKit's (y up, z toward the viewer).
Mat RotX(3, 3, cv::DataType<double>::type);
setIdentity(RotX);
RotX.at<double>(4) = -1; // element (1,1): cos(180) = -1
RotX.at<double>(8) = -1; // element (2,2): cos(180) = -1

Mat R;
Rodrigues(rvec, R);
R = R.t(); // rotation of the inverse pose

Mat rvecWorld, rvecSceneKit;
Rodrigues(R, rvecWorld);         // rvec in world coordinates
rvecSceneKit = RotX * rvecWorld; // rvec in SceneKit coordinates

Mat tvecWorld = -R * (Mat)tvec, tvecSceneKit; // tvec in world coordinates
tvecSceneKit = RotX * tvecWorld;              // tvec in SceneKit coordinates

// Assemble the 4x4 transform from the rotation and translation.
Mat rotation;
Rodrigues(rvecSceneKit, rotation);

Mat transform_matrix;
transform_matrix.create(4, 4, CV_64FC1);
// "* 1" makes the right-hand side a MatExpr, so the assignment copies into
// the 3x3 submatrix instead of rebinding the Mat header.
transform_matrix(cv::Range(0, 3), cv::Range(0, 3)) = rotation * 1;
transform_matrix.at<double>(0, 3) = tvecSceneKit.at<double>(0, 0);
transform_matrix.at<double>(1, 3) = tvecSceneKit.at<double>(1, 0);
transform_matrix.at<double>(2, 3) = tvecSceneKit.at<double>(2, 0);
// create() does not zero-initialize, so set the full bottom row explicitly.
transform_matrix.at<double>(3, 0) = 0;
transform_matrix.at<double>(3, 1) = 0;
transform_matrix.at<double>(3, 2) = 0;
transform_matrix.at<double>(3, 3) = 1;
} // closes the enclosing method (its opening brace is not shown here)
```

```objc
// Convert a 4x4 OpenCV transform (column-vector convention) into an
// SCNMatrix4 (row-vector convention); the copy below transposes it.
- (SCNMatrix4)transformToSceneKitMatrix:(cv::Mat &)openCVTransformation {
    SCNMatrix4 scnCameraPose = SCNMatrix4Identity;

    scnCameraPose.m11 = (float)openCVTransformation.at<double>(0, 0);
    scnCameraPose.m12 = (float)openCVTransformation.at<double>(1, 0);
    scnCameraPose.m13 = (float)openCVTransformation.at<double>(2, 0);
    scnCameraPose.m14 = (float)openCVTransformation.at<double>(3, 0);

    scnCameraPose.m21 = (float)openCVTransformation.at<double>(0, 1);
    scnCameraPose.m22 = (float)openCVTransformation.at<double>(1, 1);
    scnCameraPose.m23 = (float)openCVTransformation.at<double>(2, 1);
    scnCameraPose.m24 = (float)openCVTransformation.at<double>(3, 1);

    scnCameraPose.m31 = (float)openCVTransformation.at<double>(0, 2);
    scnCameraPose.m32 = (float)openCVTransformation.at<double>(1, 2);
    scnCameraPose.m33 = (float)openCVTransformation.at<double>(2, 2);
    scnCameraPose.m34 = (float)openCVTransformation.at<double>(3, 2);

    // The translation column of the OpenCV matrix becomes SceneKit's fourth row.
    scnCameraPose.m41 = (float)openCVTransformation.at<double>(0, 3);
    scnCameraPose.m42 = (float)openCVTransformation.at<double>(1, 3);
    scnCameraPose.m43 = (float)openCVTransformation.at<double>(2, 3);
    scnCameraPose.m44 = (float)openCVTransformation.at<double>(3, 3);

    return scnCameraPose;
}
```

Well, tvec is the x, y, and z translation from the origin, in whatever units your markers were measured in.

rvec is a Rodrigues rotation vector. You can use the Rodrigues function to convert it to a rotation matrix.

OpenCV's solvePnP or aruco::estimatePoseBoard return a tvec and rvec that give the pose of the object relative to the camera. So the camera is the origin, and the axes follow OpenCV's camera convention (x right, y down, z forward along the optical axis).

To reverse that and get the camera's pose relative to the board, use this code:

```cpp
Mat R;
Rodrigues(rvec, R); // rotation vector -> rotation matrix
R = R.t();          // inverse rotation (transpose of an orthonormal matrix)
tvec = -R * tvec;   // camera position in board coordinates
Rodrigues(R, rvec); // back to a rotation vector, now board-relative
```


Now, what the rvec and tvec your program returns are relative to, I have no idea. You'd have to look at its documentation.

Of course, odometry is just adding up relative measurements to get an estimate of an absolute measurement. You already have absolute measurements, so I don't know what exactly you're trying to do.


Thanks for the detailed answer. I am using aruco's own tracker, whose readme says it tracks and provides camera poses. I am using aruco from https://sourceforge.net/projects/aruco/. I have already created a markermap of my environment and I am running the aruco_test_markermap utility that comes with the package; its source is at aruco-2.0.19/utils_markermap/aruco_test_markermap.cpp in the download. Apparently it returns camera poses, but I can't use tvec_x and tvec_z directly as my camera's x and y locations. Can you guide me on how to interpret these pose values and get my camera's x and y from them? Thanks.

It seems to be returning the camera pose, so apply the reversal from my answer, and then your x, y, and z should represent the camera location in world coordinates.

Great! I did it and it worked. Now I am getting world camera coordinates. Thanks for your help.
