Pose estimation

asked 2017-06-30 03:28:33 -0600

snehal

updated 2017-06-30 05:03:11 -0600

Hello, my goal is to find the camera position, with the scale drift removed, in order to obtain a trajectory. The camera points downward, but not directly at the ground surface, and I capture images at times t and t+1. I then detect feature points with the FAST algorithm and track them with optical flow, so I now have two sets of 2D feature points (image points). To find depth and remove the scale drift I used the link below.

( http://fsr.utias.utoronto.ca/submissi... )

From that I compute the 3D coordinates (x, y, z) with respect to the camera coordinate frame.
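
For reference, a minimal sketch of the detection and tracking step described above (prev_img and next_img stand for the grayscale frames at t and t+1, file names are placeholders, and the depth computation from the linked paper is not shown):

import cv2
import numpy as np

# grayscale frames at time t and t+1 (file names are placeholders)
prev_img = cv2.imread('frame_t.png', cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread('frame_t_plus_1.png', cv2.IMREAD_GRAYSCALE)

# detect FAST corners in the frame at time t
fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(prev_img, None)
prev_pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

# track them into the frame at time t+1 with pyramidal Lucas-Kanade optical flow
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_img, next_img, prev_pts, None)

# keep only the successfully tracked pairs -> two sets of corresponding 2D image points
good_prev = prev_pts[status.ravel() == 1].reshape(-1, 2)
good_next = next_pts[status.ravel() == 1].reshape(-1, 2)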

Now my question is: after getting the 2D image points and their corresponding 3D object points, can I find the camera pose (R and t) using the OpenCV function solvePnPRansac()? My understanding is that solvePnPRansac() gives the object pose, not the camera pose, so I am confused. I want the pose between two frames. Is there a function or approach that gives me the camera pose from 2D points and their corresponding 3D points, or from 3D-to-3D correspondences? Sorry for my bad English, and thank you in advance!
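
To make the question concrete, here is a minimal sketch of the call I mean (the intrinsic matrix K, the distortion vector dist, and the synthetic point data are placeholders, not my real setup):

import cv2
import numpy as np

# placeholder intrinsics and distortion (my real calibration is not shown here)
K = np.array([[700., 0., 320.],
              [0., 700., 240.],
              [0., 0., 1.]])
dist = np.zeros(5)

# synthetic 3D object points and their 2D projections, just so the snippet runs
object_points = np.random.uniform(-1.0, 1.0, (50, 3)).astype(np.float32)
object_points[:, 2] += 5.0          # keep the points in front of the camera
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.3, -0.1, 0.5])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)
image_points = image_points.reshape(-1, 2).astype(np.float32)

# rvec/tvec returned here map object-frame points into the camera frame
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, dist)

# if that is correct, inverting the transform would give the camera pose in the object frame
R, _ = cv2.Rodrigues(rvec)
R_cam = R.T                 # camera orientation in the object frame
t_cam = -R.T @ tvec         # camera position in the object frame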


Comments

Check out the cv2.decomposeProjectionMatrix() function to invert the projection matrix.

You can also use the rotation matrix from cv2.Rodrigues():
rmat = cv2.Rodrigues(rvec)[0]

Then, the camera position expressed in world coordinates is given by:

cam_pos = -np.matrix(rmat).T * np.matrix(tvec)
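
A self-contained sketch of the decomposeProjectionMatrix() route mentioned above (K, rvec and tvec here are placeholder values; in practice rvec/tvec would come from solvePnP/solvePnPRansac):

import cv2
import numpy as np

# placeholder intrinsics and a placeholder pose from solvePnP/solvePnPRansac
K = np.array([[700., 0., 320.],
              [0., 700., 240.],
              [0., 0., 1.]])
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([[0.3], [-0.1], [0.5]])

# build the 3x4 projection matrix P = K [R | t]
R = cv2.Rodrigues(rvec)[0]
P = K @ np.hstack((R, tvec))

# decomposeProjectionMatrix returns the camera centre as a 4x1 homogeneous vector
cam_pos_h = cv2.decomposeProjectionMatrix(P)[2]
cam_pos = (cam_pos_h[:3] / cam_pos_h[3]).ravel()

# same result as the -rmat.T * tvec formula above
print(cam_pos, (-R.T @ tvec).ravel())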

swiss_knight ( 2017-06-30 15:33:45 -0600 )