Hi, I've been using the Kinect with ROS and have been part of the answers.ros.org community for the last 6 months. Recently it came to my attention that in later ROS versions much of the image processing/handling has been offloaded to external libraries such as OpenCV. I know there is a library for converting messages between the two systems (cv_bridge), so I'm hoping OpenCV can solve my problem.
I've got three Kinects which I've been using through ROS, and I can access each of their depth/RGB streams. What I want now is to visualize the point clouds (similar to RViz in ROS) and merge the three live streams into one 3D stream expressed in a single common coordinate frame. I've read about using checkerboards and have done intrinsic calibration with them, but I still need the extrinsic calibration between the cameras, and ideally a package that handles it. Conceptually I know what to do, but as far as code goes, most of the repos I find are from 2011 and are either deprecated or badly broken.
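In case it helps frame the question: as I understand it, once each camera has seen the same checkerboard, the merge is just composing extrinsics. Here's a minimal numpy sketch of that step, assuming each camera's board pose (R, t) has already been recovered per camera, e.g. with cv2.findChessboardCorners, cv2.solvePnP, and cv2.Rodrigues (the specific poses below are made-up test values, not from a real rig):

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def cam_to_reference(R_ref, t_ref, R_cam, t_cam):
    """4x4 transform taking points from one camera's frame into the reference
    camera's frame, given the shared board's pose in each camera
    (the kind of output cv2.solvePnP + cv2.Rodrigues would give per camera)."""
    return pose_to_matrix(R_ref, t_ref) @ np.linalg.inv(pose_to_matrix(R_cam, t_cam))

# Synthetic sanity check: the same board corner seen by two cameras.
theta = np.deg2rad(30)
R1, t1 = np.eye(3), np.array([0.0, 0.0, 1.0])
R2 = np.array([[ np.cos(theta), 0, np.sin(theta)],
               [ 0,             1, 0            ],
               [-np.sin(theta), 0, np.cos(theta)]])
t2 = np.array([0.5, 0.0, 1.2])

p_board = np.array([0.1, 0.2, 0.0])   # a corner on the checkerboard
p1 = R1 @ p_board + t1                # that corner in camera 1's frame
p2 = R2 @ p_board + t2                # the same corner in camera 2's frame

T_21 = cam_to_reference(R1, t1, R2, t2)
p1_est = (T_21 @ np.append(p2, 1.0))[:3]
print(np.allclose(p1_est, p1))        # the two views agree after transforming
```

Applying T_21 to every depth point from camera 2 would express its cloud in camera 1's frame, and likewise for the third camera. What I'm missing is an OpenCV-side package or tutorial that does this end to end on live streams.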
Can anyone point me to tutorials or packages in OpenCV which might help? I'm using two model 1473s and a 1414, so I'm not sure I can get the 1473s to play nice, but I should at least be able to get two cameras working.