Automatic alignment of two point clouds from a stereo rig
I built a stereo camera rig and eventually got to the point where I can extract pretty decent point clouds from it. The point cloud comes from `cv2.reprojectImageTo3D(disparity, disparity_to_depth_map)`, and when I merge it with color data it looks like a faithful 3D representation of my desk.

Now I want to learn how to stitch multiple point clouds together automatically. Basically I want to take pictures from multiple angles and merge them. From reading, it seems that if I can get the clouds close enough, I can then use ICP to finish the alignment. I know I could do the initial alignment manually, but I want to try to do it with my own code.
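For reference, here is a minimal sketch of what `cv2.reprojectImageTo3D` does under the hood: each pixel `(u, v)` with disparity `d` is multiplied by the 4x4 disparity-to-depth matrix `Q` and then dehomogenized. The focal length, principal point, and baseline below are made-up rig parameters, not values from my calibration.

```python
import numpy as np

def reproject_to_3d(disparity, Q):
    """Mimic cv2.reprojectImageTo3D: compute Q @ [u, v, d, 1]^T per pixel
    and divide by the homogeneous coordinate."""
    h, w = disparity.shape
    v, u = np.indices((h, w))
    ones = np.ones_like(disparity, dtype=np.float64)
    pix = np.stack([u, v, disparity, ones], axis=-1)  # (h, w, 4)
    pts = pix @ Q.T                                   # (h, w, 4)
    return pts[..., :3] / pts[..., 3:4]

# Hypothetical rig: focal length 500 px, principal point (320, 240),
# baseline 6 cm (Tx = -0.06 in OpenCV's stereoRectify convention).
f, cx, cy, Tx = 500.0, 320.0, 240.0, -0.06
Q = np.array([[1.0, 0.0, 0.0, -cx],
              [0.0, 1.0, 0.0, -cy],
              [0.0, 0.0, 0.0,   f],
              [0.0, 0.0, -1.0 / Tx, 0.0]])

disp = np.full((4, 4), 60.0)  # uniform 60-px disparity
cloud = reproject_to_3d(disp, Q)
print(cloud[0, 0, 2])         # depth Z = f * |Tx| / d = 0.5 m
```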
So I was thinking that I could calculate the optical flow of some feature points between the first set of stereo images and the second. Then, since I know the depth at each point, I could use `solvePnP` to get the pose of the camera for the first and second pictures. And then I could just subtract the rotation and translation vectors to work out how to rotate my point cloud. Does this make sense, or am I missing something here?
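One caveat on the last step: rigid transforms compose rather than subtract, so the relative motion between the two `solvePnP` poses is `R_rel = R2 @ R1.T`, `t_rel = t2 - R_rel @ t1`, not an element-wise difference of the vectors. A sketch with made-up poses (plain rotation matrices in place of the Rodrigues vectors `solvePnP` returns):

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical solvePnP-style poses (world -> camera): x_cam = R @ x_world + t
R1, t1 = rot_z(0.1), np.array([0.0, 0.0, 1.0])
R2, t2 = rot_z(0.3), np.array([0.1, 0.0, 1.0])

# Relative motion taking points from the camera-1 frame to the camera-2 frame:
# x_world = R1.T @ (x1 - t1) and x2 = R2 @ x_world + t2, hence:
R_rel = R2 @ R1.T
t_rel = t2 - R_rel @ t1

# Verify on a sample world point
xw = np.array([0.5, -0.2, 2.0])
x1 = R1 @ xw + t1
x2 = R2 @ xw + t2
print(np.allclose(R_rel @ x1 + t_rel, x2))  # True
```

Applying `(R_rel, t_rel)` to the first cloud (expressed in camera-1 coordinates) should bring it into the second camera's frame, close enough for ICP to refine.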
I also added an IMU to my camera so I could try to track movement that way, but I think it would drift too much.
Thanks!
Without scale, you can try the sfm module.
Maybe the registration module from the PCL library is better suited for this task.
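To illustrate what registration boils down to once correspondences are known: each ICP iteration solves a closed-form rigid-alignment problem (the Kabsch/Umeyama step). This is a hedged numpy sketch of that step on synthetic data, not PCL's actual API:

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit R, t with R @ src_i + t ~ dst_i: the closed-form step
    that ICP repeats after re-estimating correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic cloud, rotated and shifted by a known transform
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.5])
dst = src @ R_true.T + t_true

R, t = kabsch(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

PCL's registration module wraps this inner step together with correspondence search, outlier rejection, and the iteration loop.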