Hi all, I am new to OpenCV and am attempting to use feature matching to construct 3D points from a live video stream and, eventually, to find the camera pose.
So far I am successfully finding corners using eigenvalue corner detection. I have also calibrated my camera and have the distortion / extrinsic parameter data.
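In case it helps, here is roughly the setup I have so far (Python; the calibration file name and array keys are just placeholders for however the calibration results were saved):

```python
import cv2
import numpy as np

# Load previously saved calibration results (file name and keys are placeholders)
calib = np.load("calibration.npz")
camera_matrix = calib["camera_matrix"]   # 3x3 intrinsic matrix
dist_coeffs = calib["dist_coeffs"]       # distortion coefficients

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Eigenvalue (min-eigenvalue / Shi-Tomasi) corner detection
corners = cv2.goodFeaturesToTrack(
    gray,
    maxCorners=200,
    qualityLevel=0.01,
    minDistance=10,
    useHarrisDetector=False,   # use the minimum-eigenvalue score
)
# corners is an Nx1x2 array of (x, y) pixel coordinates
```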
Now, though, I am stuck: how do I turn the tracked corners into 3D point values?
Do I need to go corners -> image points -> object points? Or am I going about this the wrong way? (See the sketch below for what I have in mind.)
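To make the question concrete, here is a rough sketch of what I think the pipeline might look like (Python; the helper name, the two-frame matching, and the essential-matrix / triangulation / solvePnP route are just my assumptions, not something I know to be correct):

```python
import cv2
import numpy as np

# Assumed inputs: pts1 and pts2 are matched Nx2 pixel coordinates of the same
# corners tracked between frame 1 and frame 2.
def triangulate_and_get_pose(pts1, pts2, camera_matrix, dist_coeffs):
    # Remove lens distortion and convert to normalised image coordinates
    n1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), camera_matrix, dist_coeffs)
    n2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), camera_matrix, dist_coeffs)

    # Relative pose between the two frames from the essential matrix
    # (points are already normalised, so focal=1 and pp=(0, 0))
    E, _ = cv2.findEssentialMat(n1, n2, focal=1.0, pp=(0.0, 0.0),
                                method=cv2.RANSAC, prob=0.999, threshold=1e-3)
    _, R, t, _ = cv2.recoverPose(E, n1, n2)

    # Triangulate: camera 1 at the origin, camera 2 at [R | t]
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, n1.reshape(-1, 2).T, n2.reshape(-1, 2).T)
    pts3d = (pts4d[:3] / pts4d[3]).T.astype(np.float32)  # Nx3 object points (up to scale)

    # Pose of frame 2 relative to those reconstructed object points
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2.reshape(-1, 1, 2).astype(np.float32),
                                  camera_matrix, dist_coeffs)
    return pts3d, rvec, tvec
```

Is something along these lines the right idea, or should the corners be mapped to object points some other way?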
Thanks for your help!