From a video, I have taken three images: i1, i2 and i3. The steps for getting the keypoints in each image are:
- I detect keypoints in i1 and track them with optical flow up to i2.
- In i2, I add more keypoints (the good keypoints from i1 are still kept) and track them up to i3 (a rough sketch of this tracking step is below the list).
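Roughly, the detection/tracking part looks like this (a minimal sketch, assuming i1, i2, i3 are BGR frames already loaded, e.g. from cv2.VideoCapture; variable names are just for illustration):

```python
import cv2
import numpy as np

g1 = cv2.cvtColor(i1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(i2, cv2.COLOR_BGR2GRAY)
g3 = cv2.cvtColor(i3, cv2.COLOR_BGR2GRAY)

# Detect corners in i1 and track them into i2 with pyramidal Lucas-Kanade.
pts1 = cv2.goodFeaturesToTrack(g1, maxCorners=2000, qualityLevel=0.01, minDistance=7)
pts2, st, _ = cv2.calcOpticalFlowPyrLK(g1, g2, pts1, None)
ok = st.ravel() == 1
pts1, pts2 = pts1[ok], pts2[ok]          # keep only successfully tracked points

# Add fresh corners in i2 (the surviving i1 tracks are kept) and track into i3.
fresh = cv2.goodFeaturesToTrack(g2, maxCorners=1000, qualityLevel=0.01, minDistance=7)
pts2_all = np.vstack([pts2, fresh])
pts3, st3, _ = cv2.calcOpticalFlowPyrLK(g2, g3, pts2_all, None)
```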
Then, from the corresponding keypoints in i1 and i2, I managed to build a 3D reconstruction. Using the same pipeline, I also reconstructed the 3D points from the corresponding keypoints in i2 and i3. Now I want to combine these two reconstructions into a single scene.
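For each pair, the reconstruction I do is roughly like this (again only a sketch: K is my camera intrinsic matrix, and pts1/pts2 are the matched track positions from the snippet above):

```python
# Two-view reconstruction for one pair, e.g. (i1, i2).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Camera 1 at the origin, camera 2 at [R | t]; triangulate the matches.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.reshape(-1, 2).T, pts2.reshape(-1, 2).T)
pts3d = (pts4d[:3] / pts4d[3]).T         # Nx3 Euclidean points, up to scale
```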
I have done a little bit of reading, and I am stuck on some parts. I know I will need to call solvePnPRansac.
I have done bookkeeping during the optical flow, so I know which keypoints in the 3D reconstruction built from i1 and i2 are still present in i3. So I just need to pass those reconstructed 3D points together with the corresponding keypoints in i3 to solvePnPRansac; a sketch of the call I have in mind follows.
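Something like this (pts3d_12 and pts_i3 are placeholder names for my bookkept 3D points and their 2D positions in i3, in matching order; K is the same intrinsic matrix as above):

```python
# 3D points from the i1/i2 reconstruction that are still tracked in i3,
# paired with their observed 2D locations in i3.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    np.asarray(pts3d_12, dtype=np.float64),
    np.asarray(pts_i3, dtype=np.float64).reshape(-1, 1, 2),
    K, None)                              # None: no distortion coefficients

R3, _ = cv2.Rodrigues(rvec)               # rotation of the scene w.r.t. i3's camera
```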
From that, I can get the rotation and translation of the reconstructed 3D scene with respect to i3's camera. From there, what should I do?