SolvePnP - How to use it?
Hi, I am doing some multiview geometry reconstruction with structure from motion. So far I have the following:
- Two images as initial input
- Camera parameters and distortion coefficients
- The working rectification pipeline for the initial input images
- Creation of a disparity map
- Creating a point cloud from the disparity map by iterating over it and taking the disparity value as z (x and y are the pixel coordinates of the pixel in the disparity map); a rough sketch of this step is below. (What is not working is reprojectImageTo3D, as my Q matrix seems to be very wrong, but everything else is working perfectly.)
This gives me a good point cloud of the scene.
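For reference, this is roughly what that step looks like for me in Python; the file names and StereoSGBM settings are placeholders, not my exact values:

```python
import cv2
import numpy as np

# Placeholder inputs: the rectified image pair from the working pipeline
left = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

# StereoSGBM parameters here are assumptions, not my exact settings
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Naive point cloud: pixel coordinates as x/y, disparity value as z
points = []
h, w = disparity.shape
for y in range(h):
    for x in range(w):
        d = disparity[y, x]
        if d > 0:                       # skip invalid disparities
            points.append((x, y, d))
points = np.array(points, dtype=np.float32)

# The "proper" way would be reprojectImageTo3D with the Q matrix from
# stereoRectify, but that currently gives me bad results because Q seems wrong:
# cloud = cv2.reprojectImageTo3D(disparity, Q)
```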
Now I need to add n more images to the pipeline. I've googled a lot and found that the method solvePnP should help me.
But now I am very confused...
solvePnP takes a list of 3D points and the corresponding 2D image points and recovers the R and T vectors for the third, fourth camera, and so on. I've read that the two vectors need to be aligned, meaning that the first 3D point in the first vector corresponds to the first 2D point in the second vector.
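To make sure I understand the call itself, this is what I think the usage looks like (a minimal sketch; the function name and variables are mine, K and dist_coeffs are my calibrated intrinsics and distortion coefficients):

```python
import cv2
import numpy as np

def estimate_new_camera_pose(object_points, image_points, K, dist_coeffs):
    # object_points: Nx3, 3D points already in my reconstruction (world frame)
    # image_points:  Nx2, the same points detected in the new image, aligned so
    #                that object_points[i] corresponds to image_points[i]
    object_points = np.asarray(object_points, dtype=np.float32).reshape(-1, 3)
    image_points = np.asarray(image_points, dtype=np.float32).reshape(-1, 2)

    # The RANSAC variant should tolerate some wrong correspondences
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, K, dist_coeffs)
    if not ok:
        raise RuntimeError("solvePnP failed")

    # rvec is a Rodrigues rotation vector; convert to a 3x3 matrix if needed
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```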
So far so good. But where do I get those correspondences from? Can I use projectPoints to obtain those two vectors? Or is my whole idea of using the disparity map for depth reconstruction wrong? (Alternative: triangulatePoints using the good matches found before.)
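My current guess (and it is only a guess) is that I would have to keep, for every triangulated 3D point, the descriptor of the 2D feature it came from, and then match the new image's features against those stored descriptors. Something like this, where ORB and the bookkeeping are just my assumptions:

```python
import cv2
import numpy as np

# Assumed bookkeeping from the first two views:
#   points3d:          Nx3 float32, the triangulated 3D points
#   known_descriptors: NxD, descriptor of the keypoint that produced points3d[i]

def correspondences_for_new_image(new_image, points3d, known_descriptors):
    orb = cv2.ORB_create()                       # any detector/descriptor pair
    kps, descs = orb.detectAndCompute(new_image, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(known_descriptors, descs)

    # Each match links an existing 3D point to a 2D pixel in the new image
    object_points = np.float32([points3d[m.queryIdx] for m in matches])
    image_points = np.float32([kps[m.trainIdx].pt for m in matches])
    return object_points, image_points
```

If that is the right idea, each match would give me one aligned 3D-2D pair to feed into solvePnP.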
Can someone help me get this straight? How can I use solvePnP to add n more cameras (and therefore more 3D points) to my point cloud and improve the result of the reconstruction?