
Key-Frame VO Key-Point Data Fusion

asked 2016-11-19 13:48:15 -0600

I have a key-point based visual odometry routine which accepts an RGB-D frame as input. Successive images are tracked to each other and a cumulative rotation and translation is maintained. In its current form, significant drift occurs. I intend to transition this routine to use key-frames: new RGB-D frames are tracked to the most recent key-frame until sufficient displacement has occurred to necessitate a new key-frame. Key-frames should significantly reduce drift and are useful for further processing if so desired (m-frame bundle adjustment, etc.).

My question is pretty fundamental. Assume I have performed tracking (key-point matching and PnP) and have [R|t] from the current frame to the current key-frame. Now, given a key-point pair, one in the key-frame and one in the current frame, each with a 3D position and uncertainty/covariance, how can I fuse the new data into the key-frame data? Of course, there are many papers that dance around this and take it for granted, but for someone new to this sort of thing, I am having trouble finding a source that offers a good explanation (one might even come from the radar-tracking literature).



So, you're trying to update the [R|t] from current to key-frame? Why not just add the correspondence to the set you're using to compute the [R|t] you have so far? Maybe I just don't understand what you mean by "fuse it into the key-frame data". Wouldn't the [R|t] be the current frame's data, since the key-frame is the reference?

Tetragramm ( 2016-11-19 14:19:54 -0600 )

By fuse, I mean to fuse the estimated position and uncertainty of the current keypoint into its corresponding keypoint in the keyframe. The fused measurement then reflects the most probable position and the combined uncertainty of all fused measurements to that keypoint. I would expect this to mean that as multiple frames are tracked to a single keyframe, the uncertainty in the position of the keypoints would decrease and the estimated locations would more closely resemble their true positions.

Der Luftmensch ( 2016-11-19 14:54:10 -0600 )
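The fusion described in the comment above is, for Gaussian estimates, the product of two Gaussians (inverse-covariance weighting): the fused position leans toward the more certain estimate, and the fused covariance is smaller than either input. A minimal numpy sketch, with invented positions and covariances:

```python
import numpy as np

def fuse_gaussians(x1, P1, x2, P2):
    """Fuse two Gaussian estimates (mean, covariance) of the same 3D point
    by inverse-covariance (information) weighting."""
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P_fused = np.linalg.inv(P1_inv + P2_inv)
    x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)
    return x_fused, P_fused

# Key-frame point vs. the same point measured in the current frame
# (already transformed into key-frame coordinates via [R|t]).
x_key = np.array([1.0, 2.0, 5.0])
P_key = np.eye(3) * 0.04          # key-frame uncertainty (invented)
x_cur = np.array([1.1, 2.0, 5.2])
P_cur = np.eye(3) * 0.04          # current-frame uncertainty (invented)

x_new, P_new = fuse_gaussians(x_key, P_key, x_cur, P_cur)
# With equal covariances this reduces to the midpoint, and the fused
# covariance is half the input covariance.
```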

1 answer


answered 2016-11-19 19:06:44 -0600

Tetragramm

There are a lot of ways, but a Kalman filter is the easiest. Set up a 3D Kalman filter whose state is position; position and velocity; or position, velocity, and acceleration, depending on what the point is doing.

Your key-frame's uncertainty is the initial error covariance (errorCovPre/errorCovPost), and its position is the initial state (statePre/statePost). Then each new position is the measurement, and the new uncertainty is the measurement covariance.

By the end of a sequence, the Kalman error covariance should be smaller than your measurement error.

HERE is an example of somebody using a 2D position-and-velocity filter. It should be enough to get you started. Googling "Kalman filter" will turn up plenty of class lectures and the like if you want the full theory.
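For the static-point case, a minimal numpy sketch of the measurement-update loop the answer describes (positions and covariances invented; variable names echo cv::KalmanFilter's statePost/errorCovPost fields). With identity transition and measurement matrices, predict() is a no-op and the correction step does all the work:

```python
import numpy as np

statePost = np.array([1.0, 2.0, 5.0])  # key-frame point position (invented)
errorCovPost = np.eye(3) * 0.05        # key-frame uncertainty (invented)
R_meas = np.eye(3) * 0.05              # per-frame measurement covariance (invented)

# Each frame tracked against the key-frame contributes one measurement of
# the same point, already transformed into key-frame coordinates.
for z in [np.array([1.1, 2.0, 5.1]), np.array([0.9, 2.1, 4.9])]:
    S = errorCovPost + R_meas                     # innovation covariance
    K = errorCovPost @ np.linalg.inv(S)           # Kalman gain
    statePost = statePost + K @ (z - statePost)   # corrected position
    errorCovPost = (np.eye(3) - K) @ errorCovPost # shrunken covariance

# After two updates, errorCovPost is below the measurement covariance,
# and statePost is the information-weighted mean of all three estimates.
```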



Thanks, you are right. Assuming a static scene, updating the keypoints is simply the update step of the Kalman filter.

Der Luftmensch ( 2016-11-22 09:33:46 -0600 )
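One step worth making explicit before that update: the current-frame keypoint and its covariance must first be expressed in key-frame coordinates using the [R|t] from PnP. The covariance rotates but is unaffected by translation; a sketch with an invented rotation, ignoring uncertainty in [R|t] itself:

```python
import numpy as np

# [R|t] maps current-frame coordinates into the key-frame (from PnP).
# 90-degree rotation about Z, chosen arbitrarily for illustration.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.1, 0.0, 0.0])

x_cur = np.array([1.0, 2.0, 5.0])    # keypoint in current frame (invented)
P_cur = np.diag([0.01, 0.04, 0.09])  # its covariance (invented)

# Transform the mean; rotate the covariance (first-order propagation,
# treating [R|t] as exact).
x_in_key = R @ x_cur + t
P_in_key = R @ P_cur @ R.T
# The rotation about Z swaps the x and y variances, as expected.
```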


Seen: 389 times

Last updated: Nov 19 '16