Using the result from triangulatePoints in solvePnP

asked 2017-06-27 05:36:07 -0600

antithing

I am using triangulatePoints to get 3d points, then sending these points, with the corresponding keypoints, to solvePnp.

The resulting 3D points from triangulation are in camera space.

solvePnP requires (I believe) points in world space. Is this correct?

If so, how can I do this conversion?

Thank you.


Comments

Yes, solvePnP needs object points expressed in the object frame, as it estimates the camera pose (the transformation that maps 3D points from the object frame into the camera frame).
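To make the frame relationship concrete, here is a minimal numpy sketch of the pose that solvePnP estimates, X_cam = R @ X_obj + t, and its inversion, which is the conversion the question asks about. The function name `camera_to_object` is just for illustration, not an OpenCV API.

```python
import numpy as np

# solvePnP estimates (R, t) such that a point X_obj in the object frame
# maps into the camera frame as:  X_cam = R @ X_obj + t
# Inverting that relation gives:  X_obj = R.T @ (X_cam - t)

def camera_to_object(pts_cam, R, t):
    """Map Nx3 camera-frame points into the object frame."""
    # row-vector form of R.T @ (X - t):  (X - t) @ R
    return (pts_cam - t.reshape(1, 3)) @ R

# Round-trip check with a rotation about z and a translation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, -0.2, 1.5])

X_obj = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
X_cam = X_obj @ R.T + t          # object frame -> camera frame
recovered = camera_to_object(X_cam, R, t)
```

Note that without a known (R, t) there is nothing to invert, which is the crux of the rest of this discussion.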

How would you plan to use the keypoint part? If you plan to match keypoints in the current frame to those detected in a training frame for which you know the corresponding 3D object points, that is the classical situation.

Otherwise, can you get the corresponding 3D point in the object frame from a 3D point expressed in the camera frame? If so, maybe a (stupid?) solution would be to solve the 3D/3D problem as a classical least square problem?

Eduardo ( 2017-06-27 13:36:44 -0600 )

I currently have stereo points triangulated per frame, and keypoints matched frame to frame. I was planning to run solvePnP, then use the resulting matrix to 'flip' the points into object space before every run. So it would be:

1. match points and triangulate
2. use the current camera pose to convert the 3D points to object space
3. run solvePnP using these points and the current keypoints
4. repeat
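Step 2 of that loop needs the pose from the previous solvePnP call, which OpenCV returns as a rotation vector and translation vector (rvec, tvec). A hedged numpy sketch of that step follows; cv2.Rodrigues would normally do the vector-to-matrix conversion, but it is re-implemented here so the snippet has no OpenCV dependency, and `to_object_frame` is an illustrative helper, not an OpenCV function.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix (what cv2.Rodrigues computes)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def to_object_frame(pts_cam, rvec, tvec):
    """Invert X_cam = R @ X_obj + t for each row of pts_cam."""
    R = rodrigues(np.asarray(rvec, dtype=float).ravel())
    t = np.asarray(tvec, dtype=float).reshape(1, 3)
    return (pts_cam - t) @ R  # row form of R.T @ (X_cam - t)

# Round trip: build a camera-frame cloud from a known pose, then flip back.
rvec = np.array([0.0, 0.0, 0.3])
tvec = np.array([0.1, -0.2, 1.5])
R = rodrigues(rvec)
X_obj = np.array([[1.0, 2.0, 3.0], [0.5, -1.0, 4.0]])
X_cam = X_obj @ R.T + tvec
flipped = to_object_frame(X_cam, rvec, tvec)
```

The open question in this thread is whether feeding these flipped points back into solvePnP stays consistent as the triangulated set itself changes, which Eduardo addresses below.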

I am hoping this will keep the solved result smooth across changing points as the camera moves. Is this valid? Can you explain the 3D/3D problem for me? Thanks!

antithing ( 2017-06-27 14:33:09 -0600 )

... to be clear: unlike when using a fiducial marker with solvePnP, the 3D points will be constantly changing as the camera moves.

antithing ( 2017-06-27 14:34:12 -0600 )

Short answer: if at some point you have neither the transformation from the object frame to the camera frame, nor the 2D image point / 3D object point correspondences needed to run solvePnP, you will not be able to get the coordinates in the object frame.

As you have point clouds in the camera frame over time, you can estimate the camera displacement, i.e. the transformation between the camera frames at times t, t+1, t+2, etc. This is similar to ICP.
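When correspondences between the two clouds are known (as they are here, since the same matched keypoints are triangulated in consecutive frames), the 3D/3D least-squares problem Eduardo mentions has a closed-form solution, the Kabsch/Umeyama algorithm. A sketch, assuming matched rows between clouds P and Q:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares (R, t) minimizing sum ||R @ p_i + t - q_i||^2
    over matched rows of Nx3 arrays P and Q (Kabsch/Umeyama)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known displacement between two frames' triangulated clouds.
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.1, 0.5])
P = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 3.0],
              [2.0, 2.0, 1.0], [-1.0, 0.5, 2.5]])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
```

This is effectively a single ICP step with known correspondences, so no iterative nearest-neighbour search is needed; chaining these per-frame displacements gives the camera trajectory.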

Eduardo ( 2017-06-28 07:01:41 -0600 )

I have looked at ICP; I am hoping to use PnP, as it is more accurate. Is it possible to use the workflow I outline above? How can I flip a vector of Point3f from camera to object space? Thanks again for your time.

antithing ( 2017-06-28 08:11:42 -0600 )
