OpenCV + Kinect One (Kinect for Windows v2) + Linux + libfreenect2?

asked 2015-11-17 02:12:19 -0600 by theodore

updated 2015-12-22 07:08:11 -0600

Hi guys, for a project I'm involved in I will need to work with the new Xbox One Kinect sensor (careful: not the original Kinect from the Xbox 360; the Kinect for Windows v2, though, seems to be essentially the same device), and I was wondering if someone has managed to make it work under a Linux environment. From a quick search it is still quite unclear whether it is possible to use it under Linux and with OpenCV. It seems that libfreenect2 might provide access (OpenNI seems to be out of the picture), but I haven't yet found a clear tutorial or example on the net. Therefore, does any of you have experience with this?

Thanks in advance.


Nothing?


Comments

I have no experience with the Kinect 2, but as there are no answers yet: did you see this? It might help.

Eduardo ( 2015-11-20 11:29:03 -0600 )

Thanks @Eduardo. I also found this page, which seems promising. If I have any updates I will post them here.

theodore ( 2015-11-22 07:56:25 -0600 )

1 answer

answered 2015-12-21 18:01:16 -0600 by theodore

OK, to answer my own question: after some weeks of research I found out that it is possible. First go here, then download, build, and install the libfreenect2 library. Bear in mind that this library works only with the Kinect One sensor, also referred to as Kinect for Windows v2. On Linux you will most likely find a package through your distribution's package manager; on Windows just follow the steps described in the README file and it should work without problems; for macOS, unfortunately, I cannot tell, since I do not own a macOS machine.
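
Before wiring everything into OpenCV, you can do a quick sanity check that the library, the headers, and USB access are set up correctly with a minimal sketch that just enumerates connected devices (this is not from the libfreenect2 docs, just a small check program; it assumes libfreenect2 is on your include and linker paths):

#include <iostream>

#include <libfreenect2/libfreenect2.hpp>

int main()
{
    libfreenect2::Freenect2 freenect2;

    // enumerateDevices() probes USB and returns the number of Kinect v2 sensors found
    int devices = freenect2.enumerateDevices();
    std::cout << devices << " Kinect v2 device(s) found" << std::endl;

    if(devices > 0)
        std::cout << "default serial: " << freenect2.getDefaultDeviceSerialNumber() << std::endl;

    return devices > 0 ? 0 : 1;
}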

Once you have installed the libfreenect2 library along with its dependencies (described in the README file, e.g. libusb, libjpeg-turbo, etc.) and OpenCV, you are ready to go. Below is a code snippet showing how to display the RGB, depth, IR, and RGB-D output (registered both from RGB to depth and vice versa) using OpenCV:

//! [headers]
#include <iostream>
#include <stdio.h>
#include <iomanip>
#include <time.h>
#include <signal.h>
#include <opencv2/opencv.hpp>

#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>
#include <libfreenect2/packet_pipeline.h>
#include <libfreenect2/logger.h>
//! [headers]

using namespace std;
using namespace cv;

enum Processor { cl, gl, cpu }; // available depth packet processors: OpenCL, OpenGL, CPU

bool protonect_shutdown = false; // Whether the running application should shut down.

void sigint_handler(int s)
{
  protonect_shutdown = true;
}

int main()
{
    std::cout << "Hello World!" << std::endl;

    //! [context]
    libfreenect2::Freenect2 freenect2;
    libfreenect2::Freenect2Device *dev = nullptr;
    libfreenect2::PacketPipeline *pipeline = nullptr;
    //! [context]

    //! [discovery]
    if(freenect2.enumerateDevices() == 0)
    {
        std::cout << "no device connected!" << std::endl;
        return -1;
    }

    string serial = freenect2.getDefaultDeviceSerialNumber();

    std::cout << "SERIAL: " << serial << std::endl;
    //! [discovery]

    int depthProcessor = Processor::cl; // change to Processor::gl or Processor::cpu if OpenCL is not available

    if(depthProcessor == Processor::cpu)
    {
        if(!pipeline)
            //! [pipeline]
            pipeline = new libfreenect2::CpuPacketPipeline();
            //! [pipeline]
    } else if (depthProcessor == Processor::gl) {
#ifdef LIBFREENECT2_WITH_OPENGL_SUPPORT
        if(!pipeline)
            pipeline = new libfreenect2::OpenGLPacketPipeline();
#else
        std::cout << "OpenGL pipeline is not supported!" << std::endl;
#endif
    } else if (depthProcessor == Processor::cl) {
#ifdef LIBFREENECT2_WITH_OPENCL_SUPPORT
        if(!pipeline)
            pipeline = new libfreenect2::OpenCLPacketPipeline();
#else
        std::cout << "OpenCL pipeline is not supported!" << std::endl;
#endif
    }

    if(pipeline)
    {
        //! [open]
        dev = freenect2.openDevice(serial, pipeline);
        //! [open]
    } else {
        dev = freenect2.openDevice(serial);
    }

    if(dev == nullptr)
    {
        std::cout << "failure opening device!" << std::endl;
        return -1;
    }

    signal(SIGINT, sigint_handler);
    protonect_shutdown = false;

    //! [listeners]
    libfreenect2::SyncMultiFrameListener listener(libfreenect2::Frame::Color |
                                                  libfreenect2::Frame::Depth |
                                                  libfreenect2::Frame::Ir);
    libfreenect2::FrameMap frames;

    dev->setColorFrameListener(&listener);
    dev->setIrAndDepthFrameListener(&listener);
    //! [listeners]

    //! [start]
    dev->start();

    std::cout << "device serial: " << dev->getSerialNumber() << std::endl;
    std::cout << "device firmware: " << dev->getFirmwareVersion() << std::endl;
    //! [start]

    //! [registration setup]
    libfreenect2::Registration* registration = new libfreenect2::Registration(dev->getIrCameraParams(), dev->getColorCameraParams());
    libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4), depth2rgb(1920, 1080 + 2, 4); // check here (https://github.com/OpenKinect/libfreenect2/issues/337) and here (https://github.com/OpenKinect/libfreenect2/issues/464) why depth2rgb image should be bigger
    //! [registration setup]

    Mat rgbmat, depthmat, depthmatUndistorted, irmat, rgbd, rgbd2;

    //! [loop start]
    while(!protonect_shutdown)
    {
        listener.waitForNewFrame(frames);
        libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
        libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
        libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
        cv::Mat(rgb->height, rgb->width, CV_8UC4, rgb->data).copyTo(rgbmat);
        cv::Mat(ir->height, ir->width, CV_32FC1, ir->data).copyTo(irmat);
        cv::Mat(depth->height, depth->width, CV_32FC1, depth->data).copyTo(depthmat);

        cv::imshow("rgb", rgbmat);
        cv::imshow("ir", irmat / 4500.0f);       // normalized for display, see the comments below
        cv::imshow("depth", depthmat / 4500.0f); // normalized for display, see the comments below

        //! [registration]
        registration->apply(rgb, depth, &undistorted, &registered, true, &depth2rgb);
        //! [registration]

        cv::Mat(undistorted.height, undistorted.width, CV_32FC1, undistorted.data).copyTo(depthmatUndistorted);
        cv::Mat(registered.height, registered.width, CV_8UC4, registered.data).copyTo(rgbd);
        cv::Mat(depth2rgb.height, depth2rgb.width, CV_32FC1, depth2rgb.data).copyTo(rgbd2);

        cv::imshow("undistorted", depthmatUndistorted / 4500.0f);
        cv::imshow("registered", rgbd);
        cv::imshow("depth2RGB", rgbd2 / 4500.0f);

        int key = cv::waitKey(1);
        protonect_shutdown = protonect_shutdown || (key > 0 && ((key & 0xFF) == 27)); // shutdown on escape

        listener.release(frames);
    }
    //! [loop end]

    //! [stop]
    dev->stop();
    dev->close();
    //! [stop]

    delete registration;

    std::cout << "Goodbye World!" << std::endl;
    return 0;
}
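
A note on building: the example has to be linked against both libfreenect2 and OpenCV. If you installed libfreenect2 via CMake as described in its README, it installs a CMake package, so find_package(freenect2 REQUIRED) together with the usual find_package(OpenCV REQUIRED) in your own CMakeLists.txt should be enough to pull in the include directories and libraries.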

Comments

Thanks so much for answering this after you figured it out. I was just about to start trying to write this myself!

Rghamilton3 ( 2016-01-11 00:25:37 -0600 )

So I understand why the division by 4500.0f is needed, but where did you get that number? Thanks in advance!

Rghamilton3 ( 2016-01-15 02:29:40 -0600 )

libfreenect2 delivers the depth frame as 32-bit floats in millimetres, and the Kinect v2's reliable depth range tops out at roughly 4.5 m, i.e. 4500 mm. Dividing by 4500.0f therefore just normalizes the values into [0, 1] so that they can be shown as an image; the exact constant does not make a big difference for visualization purposes.

theodore ( 2016-01-15 03:07:31 -0600 )
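
For reference, a minimal sketch of that normalization (the helper name depthToGray is made up here; it assumes depthMm is the CV_32FC1 depth frame in millimetres as delivered by libfreenect2). Converting to 8-bit is also handy if you want to save the depth map with imwrite rather than only display it:

// Hypothetical helper: map a float depth frame (millimetres) to an 8-bit
// grayscale image. 4500 mm is roughly the sensor's maximum range, so valid
// depths map into [0, 255] and anything farther saturates to white.
cv::Mat depthToGray(const cv::Mat& depthMm)
{
    cv::Mat gray;
    depthMm.convertTo(gray, CV_8UC1, 255.0 / 4500.0);
    return gray;
}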

Works great in Linux as well.

catch-twenty-two ( 2016-08-12 19:15:56 -0600 )

Thanks again @theodore. For anyone else looking at this as an answer for Linux: I wrapped it in C++11 code and made the Eclipse project available on GitHub, here: https://github.com/catch-twenty-two/l...

catch-twenty-two ( 2017-01-07 23:33:20 -0600 )

@theodore and @catch-twenty-two, thank you so much for selflessly providing the code; it's helpful for all of us. I want to do autonomous navigation and need to convert the depth data to a laser scan so that I can use ROS gmapping (http://wiki.ros.org/gmapping). Do you have any suggestions on how I should go about it? And which version of OpenCV are you using?

pallav bakshi ( 2017-01-24 12:50:51 -0600 )

I do not know anything about converting depth data to laser scans. As for the OpenCV version, for this code I used 3.1, but it should work with the latest build without issues.

theodore ( 2017-02-26 17:38:21 -0600 )

Hi there. How do you run this file from the terminal? I have git-cloned the repository into my home directory. How should I run the file?

adilbonzi ( 2017-04-01 04:28:42 -0600 )

Hi @theodore, why did you choose not to use the Microsoft SDK (I think you use a Windows system)?

LBerger ( 2017-05-29 02:13:53 -0600 )

Mainly because I didn't want to be dependent on the Microsoft platform and the Windows OS. With libfreenect2 you get a cross-platform solution. In the beginning I was working with the Kinect SDK, but then I switched. (I use both Windows and Linux systems, though my main system is Linux.)

theodore ( 2017-05-29 11:23:58 -0600 )
