
Resize Kinect color stream to match depth

asked 2015-06-10 15:26:32 -0600 by stillNovice

updated 2015-06-10 15:30:13 -0600

I am streaming data from the Kinect for Windows v2 into OpenCV.

I have the color coming in at 960 x 540, but for some reason the depth images are at the strange size of 512 x 424.

I need to process these images by matching pixels one to the other, to triangulate world coordinates. For this I need the images to be the same size, and the pixels to match up.

So, what is the best way to resize the color image to match the depth? Like this (pseudo-code)?

cv::Size size(depth.cols, depth.rows);  // a cv::Mat exposes cols/rows, not width/height
cv::resize(color, color, size);

This gives an image that doesn't match, due, I think, to the two different lenses on the cameras.

So, how can I remap the color image to the same size/resolution as the depth, in real time?

Thanks.


1 answer


answered 2015-06-11 04:42:28 -0600 by R.Saracchini

Well, I've never worked with the Kinect 2, but I assume it is very similar to the original Kinect. The Kinect has two cameras (one IR and one RGB), and it generates its depth map by projecting a pattern onto the scene with an IR projector and computing depths from the IR camera's input.

That said, the raw data from the RGB camera won't be aligned with the depth data, because the cameras are distinct. It is not a mere matter of resizing the images: the depth map has to be properly mapped, or better yet, registered. This means you have to re-project the depth points into the RGB camera's point of view and assign a colour to each of them.

As I said, I'm not sure how you capture the data from the Kinect 2. If you are using the OpenNI backend of OpenCV, you can do just as I explained in another question HERE, and the depth map will be properly registered with the RGB image.

If you are extracting the data from the Kinect some other way, you must determine the pose of the RGB camera relative to the IR camera in order to do the reprojection.

Calibrating the cameras (only once!):

Calibrate both cameras as a stereo pair using cv::stereoCalibrate, with the camera matrix of each camera (computed beforehand with cv::calibrateCamera) and a calibration pattern viewed by both cameras. You will obtain the rotation and translation needed to re-project the points. If you have the camera matrices of the RGB and IR cameras a priori, it will be much better. The ones for the Kinect 1 are quite well known and can be found on the internet; I believe it is easy to find the ones for the Kinect 2 by now.

Colouring the depth map (for every frame):

With the procedure above, you have the rotation and translation matrices and the camera matrix of the RGB camera. All you have to do is project each 3D depth point into the RGB camera space using cv::projectPoints and assign it the colour of the corresponding pixel (or an interpolation of the nearby pixels). Note that some points will be colourless, since they are not visible to the RGB camera.


Comments

Nice answer!

StevenPuttemans ( 2015-06-11 08:43:17 -0600 )
