
Using solvePnP on a video stream with changing points.

asked 2017-06-26 12:02:42 -0600 by antithing

updated 2017-06-29 09:01:20 -0600

I am using a stereo camera to triangulate 3d points from rectified images. I then use these points, together with the found keypoints, to run solvePnP. Everything runs, the 3d points look correct, and the projected points look good. But the returned camera pose does not: it jumps around and gives incorrect values.

My workflow is:

  1. Grab stereo frames.
  2. Find keypoints in the previous (left) frame and the current (left) and (right) frames.
  3. Match the previous (left) frame with the current (left) frame.
  4. Match the left descriptors (matched in the previous step) with the current (right) descriptors.
  5. Triangulate points from the matched stereo keypoints.
  6. Use the left-camera keypoints and the triangulated 3d points to run solvePnP.
  7. Invert the rvec and tvec values to get the camera pose (see the sketch below).
  8. Repeat.
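For reference, the inversion in step 7 is done roughly like this (a minimal sketch, not my exact code; points3D, keypointsLeft, K and distCoeffs are illustrative names):

    #include <opencv2/calib3d.hpp>

    // Sketch: solvePnP returns the rvec/tvec that map object-space points into
    // the camera frame; inverting that transform gives the camera pose.
    cv::Mat rvec, tvec;
    cv::solvePnP(points3D, keypointsLeft, K, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);          // 3x3 rotation, object -> camera
    cv::Mat R_cam = R.t();           // camera orientation in object space
    cv::Mat t_cam = -R.t() * tvec;   // camera position in object space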

I have checked the 3d points in a 3d application, and I am projecting them back to the camera frame. They look good.

I use the same keypoints that I triangulate with as the image points, so the correspondences are good.

The 3d points are in camera space, as that is what triangulatePoints returns.
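This is roughly how I get them (a sketch with illustrative names; P1 and P2 are the rectified projection matrices, ptsL and ptsR the matched keypoints as vector<Point2f>):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Sketch: triangulatePoints returns 4xN homogeneous points in the frame
    // that the projection matrices are expressed in; divide through by w.
    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, ptsL, ptsR, points4D);
    points4D.convertTo(points4D, CV_32F);  // make sure we read floats below

    std::vector<cv::Point3f> points3D;
    for (int i = 0; i < points4D.cols; ++i)
    {
        float w = points4D.at<float>(3, i);
        points3D.emplace_back(points4D.at<float>(0, i) / w,
                              points4D.at<float>(1, i) / w,
                              points4D.at<float>(2, i) / w);
    }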

The calibration data is good.

I notice that even though I am matching the previous frame to the current one, when I look at the 3d point sets for consecutive frames, they do not align. For example, the first point in the set is in a different location from frame 1 to frame 2.

The camera pose, inverted or not, jumps around between -1 and 1, and does not change as the camera moves.

What am I missing?

I have tried flipping the 3d points to object space every frame, and adding the tvec and rvec every frame, and I see the same result.
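The flip to object space that I tried looks roughly like this (a sketch; it assumes R_cam and t_cam are the inverted pose from the sketch above):

    #include <opencv2/core.hpp>

    // Sketch: map a camera-space point into world/object space using the
    // inverted pose: X_world = R_cam * X_cam + t_cam.
    cv::Point3f toWorld(const cv::Point3f& pCam,
                        const cv::Mat& R_cam, const cv::Mat& t_cam)
    {
        cv::Mat p = (cv::Mat_<double>(3, 1) << pCam.x, pCam.y, pCam.z);
        cv::Mat w = R_cam * p + t_cam;
        return cv::Point3f((float)w.at<double>(0),
                           (float)w.at<double>(1),
                           (float)w.at<double>(2));
    }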


1 answer


answered 2017-06-26 17:39:45 -0600 by Tetragramm

To get the additive value, you want to use THIS function.

HERE is a good presentation on what you're doing that should help with finding references and the terms you need to search for.


Comments

Thank you. I have added this, and also a function to flip my 3d points to world space. I have edited the question; if you have a moment, could you take a look?

antithing ( 2017-06-27 04:09:33 -0600 )

Ok, let's debug this from the beginning.

Is the first set of points correct? I.e., in the first frame you are at (0,0,0), and you see all these points out in the world. Are they approximately correct? All positive z, and do they show up in the right place on the image when you use projectPoints?
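A check along these lines is enough (a sketch with illustrative names; points3D are the triangulated points and keypointsLeft the measured keypoints they came from):

    #include <opencv2/calib3d.hpp>
    #include <iostream>
    #include <vector>

    // Sketch: reproject the triangulated points with the same rvec/tvec and
    // compare against the measured keypoints; errors should be a few pixels.
    std::vector<cv::Point2f> reproj;
    cv::projectPoints(points3D, rvec, tvec, K, distCoeffs, reproj);
    for (size_t i = 0; i < reproj.size(); ++i)
        std::cout << "measured " << keypointsLeft[i]
                  << "  reprojected " << reproj[i] << std::endl;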

Tetragramm ( 2017-06-27 07:32:59 -0600 )

Yes, I am testing both with projectPoints and by importing them into a 3d application. They look good.

antithing ( 2017-06-28 04:33:17 -0600 )

I am almost certain that my issue is coming from the points being in CAMERA SPACE instead of OBJECT SPACE. The code I have added above does not actually work for this. How can I flip the points to object space? Thank you again.

antithing ( 2017-06-28 04:35:09 -0600 )

Do you have an external reference that you use as 0,0,0? If not, then there's no true difference between camera and object space.

Your code is too fragmented for me to follow the data flow, but it should go like this:

  1. Camera is at (0,0,0), or at some initial value based on a reference point.
  2. Capture image and use the camera pose to find the 3d locations of points.
  3. Capture image and find keypoint matches between the previous frame and this one.
  4. With the previous 3d points and the new 2d points, use solvePnP, which gives the current pose (in the same coordinate frame you started with).
  5. Go to 3 and repeat (a rough sketch follows).
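A rough skeleton of that loop (just a sketch; grabStereo, matchToPrevious and triangulateInWorld are placeholders for your own code, not OpenCV API):

    // Sketch of the loop above; grabStereo(), matchToPrevious() and
    // triangulateInWorld() stand in for your own capture/matching code.
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);  // step 1: start at the origin
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);
    std::vector<cv::Point3f> world3D;             // 3d points from the last frame

    while (grabStereo(left, right))
    {
        // step 3: match the previous frame's points against this frame's keypoints
        std::vector<cv::Point2f> pts2D = matchToPrevious(left);

        // step 4: previous 3d points + new 2d points -> current pose
        if (!world3D.empty())
            cv::solvePnP(world3D, pts2D, K, distCoeffs, rvec, tvec);

        // step 2, for the next iteration: triangulate with projection matrices
        // built from the current pose, so all frames share one world frame
        world3D = triangulateInWorld(left, right, rvec, tvec);
    }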
Tetragramm ( 2017-06-28 17:38:17 -0600 )

I have edited my question to show my workflow. What am I still missing? Thanks again for your time.

antithing ( 2017-06-29 05:32:56 -0600 )
Ah, I see the problem. You need to run triangulatePoints using the projection matrices from the previous frame.

Secondly, don't do a separate solvePnP for both cameras, or they'll start drifting. Do it just for the one that calibrates as the origin, then apply the transformation you get for the second camera to the results from solvePnP to get its location.

Thirdly, each frame, use the results from solvePnP to create new projection matrices to use with triangulatePoints.
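Building those projection matrices from the solvePnP output is just P = K * [R | t]; a minimal sketch (for the right camera, first compose the pose with the fixed stereo extrinsics, e.g. with cv::composeRT):

    #include <opencv2/calib3d.hpp>

    // Sketch: rvec/tvec from solvePnP already map world points into the camera
    // frame, which is exactly the [R | t] a projection matrix needs.
    cv::Mat projectionFromPose(const cv::Mat& K,
                               const cv::Mat& rvec, const cv::Mat& tvec)
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);
        cv::Mat Rt;
        cv::hconcat(R, tvec, Rt);  // 3x4 [R | t]
        return K * Rt;             // P = K * [R | t], a 3x4 projection matrix
    }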

Tetragramm ( 2017-06-29 17:49:22 -0600 )

Ah, of course! Triangulating with updated projection matrices so that the points are created in the proper space. THANK YOU! :)

antithing ( 2017-06-30 04:15:35 -0600 )

... I have this working well, after much help from here: http://answers.opencv.org/question/162932/create-a-stereo-projection-matrix-using-rvec-and-tvec/ Thank you again for your time. It is very much appreciated.

antithing ( 2017-06-30 11:19:39 -0600 )
