Auto rotation of two point clouds from stereo rig

asked 2020-01-30 13:06:55 -0600

eric_engineer

updated 2020-01-30 13:07:26 -0600

I built a stereo camera rig and eventually got to the point where I can extract pretty decent point clouds from it. The point cloud comes from cv2.reprojectImageTo3D(disparity, disparity_to_depth_map), and when I merge it with color data it looks like a faithful 3D representation of my desk. Now I want to learn how to stitch multiple point clouds together automatically. Basically, I want to take pictures from multiple angles and stitch them together. From my reading, it seems that if I can get the clouds close enough, I can then use ICP to merge them. I know I could do the initial alignment manually, but I want to try doing it with my own code.
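For context, the reprojection step can be sketched in plain NumPy: reprojectImageTo3D applies the homogeneous mapping defined by the 4x4 Q (disparity-to-depth) matrix that stereoRectify produces. The focal length, principal point, and baseline below are made-up values, and the exact sign conventions in Q depend on your calibration, so treat this as a sketch of the math rather than a drop-in replacement:

```python
import numpy as np

# Hypothetical intrinsics: focal length f (px), principal point (cx, cy),
# baseline Tx (m). A real Q comes from cv2.stereoRectify.
f, cx, cy, Tx = 700.0, 320.0, 240.0, 0.06
Q = np.array([[1.0, 0.0, 0.0,   -cx],
              [0.0, 1.0, 0.0,   -cy],
              [0.0, 0.0, 0.0,     f],
              [0.0, 0.0, 1.0/Tx, 0.0]])

def reproject_to_3d(disparity, Q):
    """Same homogeneous mapping cv2.reprojectImageTo3D applies:
    [X Y Z W]^T = Q @ [x y d 1]^T, then divide by W."""
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    homog = np.stack([xs, ys, disparity, np.ones_like(disparity)], axis=-1)
    pts = homog @ Q.T
    return pts[..., :3] / pts[..., 3:4]

disp = np.full((4, 4), 10.0)        # constant 10-px disparity
cloud = reproject_to_3d(disp, Q)
# depth Z = f * Tx / d = 700 * 0.06 / 10 = 4.2 m for every pixel
```

The useful check here is that depth falls out as Z = f·Tx/d, which is why disparity errors hurt far more at long range than up close.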

So I was thinking that I could calculate the optical flow of some feature points between the first set of stereo images and the second. Then, since I know the depth at each point, I could use solvePnP to get the pose of the first and second pictures. Then I could just subtract the rotation and translation vectors to work out how to rotate my point cloud. Does this make sense, or am I missing something here?
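One note on the last step: rotations don't combine by subtracting rvecs; they compose by matrix multiplication. A sketch of turning two solvePnP-style poses (rvec, tvec pairs) into the relative transform between the shots, using a hand-rolled Rodrigues conversion so it runs without OpenCV (cv2.Rodrigues does the same thing) and hypothetical pose values:

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector to rotation matrix (what cv2.Rodrigues computes)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def relative_transform(rvec1, tvec1, rvec2, tvec2):
    """Transform taking points from the camera-1 frame into the camera-2
    frame. Note the composition: R_rel = R2 @ R1^T, not rvec2 - rvec1."""
    R1, R2 = rodrigues(rvec1), rodrigues(rvec2)
    R_rel = R2 @ R1.T
    t_rel = tvec2 - R_rel @ tvec1
    return R_rel, t_rel

# Hypothetical poses: identity first shot, 30 deg yaw + 10 cm shift second
rvec1, tvec1 = np.zeros(3), np.zeros(3)
rvec2, tvec2 = np.array([0.0, np.pi / 6, 0.0]), np.array([0.1, 0.0, 0.0])
R_rel, t_rel = relative_transform(rvec1, tvec1, rvec2, tvec2)
```

Applying (R_rel, t_rel) to the first cloud should land it roughly on the second, which is exactly the coarse alignment ICP wants as a starting point.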

I also added an IMU to my camera so I could try to track movement that way, but I think it would drift too much.

Thanks!


Comments

Without scale, you can try the sfm module.

LBerger ( 2020-01-31 01:28:59 -0600 )

Maybe the registration module from the PCL library is better suited for this task.

kbarni ( 2020-01-31 06:44:16 -0600 )
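To make the ICP suggestion concrete: the core of point-to-point ICP (what PCL's registration module implements, minus the kd-trees, outlier rejection, and convergence criteria that make it robust) fits in a few lines of NumPy. This toy version brute-forces nearest neighbours, so it is only a sketch for small clouds with a decent initial alignment:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: match nearest neighbours, refit, repeat."""
    src = source.copy()
    for _ in range(iters):
        # brute-force nearest neighbour (fine for toy-sized clouds)
        d = np.linalg.norm(src[:, None] - target[None], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
    # single rigid transform from the original source to its final position
    return best_fit_transform(source, src)

# Toy check: recover a known small rotation + translation
rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
moved = cloud @ Rz.T + np.array([0.05, -0.02, 0.0])
R, t = icp(cloud, moved)
aligned = cloud @ R.T + t
```

For real scans you would want PCL's (or Open3D's) ICP instead; the point of the sketch is just that ICP needs the clouds roughly pre-aligned, which is where the solvePnP idea in the question comes in.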