
I agree Eduardo, it's better to use the Kinect2 libraries directly (Kinect 2 SDK or libFreenect2) to acquire the images and convert them manually to OpenCV `Mat`.

The Kinect2 is quite "dumb": you start it and read the images in a loop. You can't set the image or depth resolution, format, brightness, etc., so there is no real need for OpenNI. Starting the streams is quite simple with the libraries. The only advantage of OpenNI is that it's device-independent, but the data provided by the Kinect2 is different from the Kinect1/PrimeSense sensors anyway.

If you are not using Windows 8/10, you must use libFreenect2. It's quite stable and complete. Build the library and check that it works using the Protonect utility.
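A typical out-of-source CMake build, assuming a Linux machine and the project's GitHub repository (check the libFreenect2 README for the dependencies of your platform), might look like:

```shell
# Fetch and build libfreenect2 (options are illustrative; see the
# project's README for per-platform dependencies and install paths).
git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2
mkdir build && cd build
cmake ..
make

# With a Kinect2 plugged in, Protonect should show the live streams.
./bin/Protonect
```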

On Windows 8 or 10 you can also use the Kinect 2 SDK. Install it and run the examples to check that it's working.

Then, start from an example project to capture the frames. When you get the image buffer, just wrap it in a `Mat`. Something like this (not real code, just to get the idea):

```cpp
float *data = captureDepthFrame();      // hypothetical capture function
Mat depthFrame(424, 512, CV_32F, data); // rows = height, cols = width
```


With libFreenect2, it should look something like this:

```cpp
...
listener.waitForNewFrame(frames);
Frame *rgb = frames[Frame::Color];
Frame *depth = frames[Frame::Depth];
Mat rgbMat(rgb->height, rgb->width, CV_8UC4, rgb->data);        // color image, HD resolution
Mat depthMat(depth->height, depth->width, CV_32F, depth->data); // depth image in mm, 512x424
...
```