
Using the RGBDOdometry sample correctly

asked Mar 13 '14

Hi everyone, I'm playing around with finding the egomotion of a camera, specifically a Kinect sensor. One approach I've looked at is the sample packed with OpenCV, 'rgbdodometry', which I believe is based on this paper: https://vision.in.tum.de/_media/spezial/bib/steinbruecker_sturm_cremers_iccv11.pdf

However, other than that I can find very little documentation on it. It seems to return the projection matrix [R|t] between the camera views?

In an attempt to extract these values I tried:

// estimate the transform between frame 0 and frame 1
bool isFound = cv::RGBDOdometry(Rt, oldRt, image0, depthFlt0, mask0,
                                image1, depthFlt1, mask1, cameraMatrix,
                                minDepth, maxDepth, maxDepthDiff,
                                iterCounts, minGradMagnitudes, transformationType);

Mat rot, trans, rX, rY, rZ, rE, K;
Mat P = Rt.rowRange(0, 3);                // top 3x4 of the 4x4 Rt
cout << P << endl;
decomposeProjectionMatrix(P, K, rot, trans, rX, rY, rZ, rE);
cout << "Rotated [x,y,z]: " << rE << endl << "Transformed [x,y,z]: " << trans << endl;

I tried this on the off chance that it might work, but I'm pretty sure it doesn't. The camera matrix K that is returned is nothing like the cameraMatrix I pass to the RGBDOdometry function (which I calibrated beforehand).

I have read that the full projection matrix is K[R|t], which would explain why I am getting an invalid K if I only had [R|t] to begin with!
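For reference, my understanding is that decomposeProjectionMatrix expects the full P = K[R|t], so presumably I'd have to build that myself first. Something like this, just a sketch of what I mean, reusing the same variables as above:

// Rough sketch: build P = K[R|t] before decomposing (depths converted so the
// multiplication doesn't fail if Rt and cameraMatrix are stored differently)
Mat K64, Rt64;
cameraMatrix.convertTo(K64, CV_64F);
Rt.convertTo(Rt64, CV_64F);
Mat P_full = K64 * Rt64.rowRange(0, 3);   // 3x3 * 3x4 = 3x4
decomposeProjectionMatrix(P_full, K, rot, trans, rX, rY, rZ, rE);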

Ideally, all I'm looking for now is a rough estimate of how the camera is moving, and I'd probably visualize that by drawing the camera path in a separate window (ignoring changes in Y).

Is this possible?


1 answer


answered Mar 15 '14 by kwaegel (updated Mar 15 '14)

I believe Rt is just a standard transform matrix describing the camera movement between the two frames. It's not a projection matrix.
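If all you want is the relative motion, you can also read R and t straight out of it. A minimal sketch, assuming Rt is the 4x4 matrix that RGBDOdometry fills in:

cv::Mat R = Rt(cv::Rect(0, 0, 3, 3));   // top-left 3x3 block: rotation between the frames
cv::Mat t = Rt(cv::Rect(3, 0, 1, 3));   // last column (top 3 rows): translation between the frames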

I like to convert Rt to an Eigen::Affine3f for use with non-OpenCV libraries:

#include <opencv2/core/eigen.hpp>   // for cv::cv2eigen
#include <Eigen/Geometry>           // for Eigen::Affine3f

Eigen::Matrix4f mat;
cv::cv2eigen(Rt, mat);                    // Rt is the 4x4 matrix filled in by RGBDOdometry
Eigen::Affine3f incrementalAffine(mat);

If you want to estimate the total camera movement, you need to integrate the frame-to-frame movements as you go:

pose = incrementalAffine * pose;
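As a rough sketch of how that could look in full (the variable names are just illustrative), assuming the pose starts at identity and you only care about the ground-plane path:

Eigen::Affine3f pose = Eigen::Affine3f::Identity();   // camera pose at the first frame

// each new frame, after converting Rt to incrementalAffine as above:
pose = incrementalAffine * pose;

// current camera position; plot x/z and ignore y for a top-down path
Eigen::Vector3f position = pose.translation();
float x = position.x();
float z = position.z();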

Comments

Thanks for the reply. So if Rt is just a transform from one image to the next, and I wanted to plot the camera's path as above, would it be enough to extract the final column t as the new relative position? Or is it of the form: rotate by R, then translate by t?

If so, do I need to extract Euler angles from R first?

Sorry for the inundation of questions; I've been stuck on implementing an odometry algorithm for quite a while!

FraserT (Mar 17 '14)
