depth sensor streaming in RGBD camera

asked 2014-10-10 16:18:40 -0600 by SumitM

updated 2014-10-10 16:20:38 -0600

Hi,

Does anyone know how to access the depth sensor of an RGBD camera using OpenCV, for tasks such as streaming, 3D reconstruction, or point cloud generation? I am new to the field and do not understand many things, so please bear with me. So far I have been using the camera's SDK, but I have reached a stage where using that may no longer be possible. Any help will be greatly appreciated. Thanks in advance! SumitM


2 answers


answered 2014-10-14 06:52:30 -0600 by R.Saracchini

If you have compiled OpenCV with OpenNI support, and the RGBD camera you use is supported (e.g. Kinect/Asus), then you can use the cv::VideoCapture class to obtain RGBD data.

For example:

 #include <opencv2/core/core.hpp>
 #include <opencv2/highgui/highgui.hpp>  // cv::VideoCapture and the CV_CAP_OPENNI_* constants (OpenCV 2.x)

 cv::VideoCapture videoReader;
 videoReader.open( CV_CAP_OPENNI );      // check videoReader.isOpened() before going further
 // Set capture to 640x480 images at 30Hz
 videoReader.set( CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, CV_CAP_OPENNI_VGA_30HZ );
 // Grab the current frame (RGB and depth are grabbed together)
 videoReader.grab();
 // Retrieve the RGB image, depth map and valid-depth mask from the grabbed frame
 cv::Mat rgbImg, dptImg, mskImg;
 videoReader.retrieve( rgbImg, CV_CAP_OPENNI_BGR_IMAGE );
 videoReader.retrieve( dptImg, CV_CAP_OPENNI_DEPTH_MAP );        // CV_16UC1, depth in millimetres
 videoReader.retrieve( mskImg, CV_CAP_OPENNI_VALID_DEPTH_MASK );

See the OpenNI sample in the samples folder that comes with the OpenCV source code.
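
For continuous streaming and display, a minimal sketch built on the snippet above could look like the following (the 4000 mm scale used to visualise the depth map is just an assumption; adjust it to your sensor's range):

 // Streaming loop: grab frames until ESC is pressed and display
 // the colour image plus the depth map scaled into an 8-bit image.
 for(;;)
 {
     if( !videoReader.grab() )
         break;

     cv::Mat rgbImg, dptImg;
     videoReader.retrieve( rgbImg, CV_CAP_OPENNI_BGR_IMAGE );
     videoReader.retrieve( dptImg, CV_CAP_OPENNI_DEPTH_MAP );    // CV_16UC1, millimetres

     cv::Mat dptShow;
     dptImg.convertTo( dptShow, CV_8UC1, 255.0 / 4000.0 );       // map 0..4 m to 0..255 for display
     cv::imshow( "rgb", rgbImg );
     cv::imshow( "depth", dptShow );

     if( cv::waitKey( 30 ) == 27 )   // ESC
         break;
 }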


Comments

I have tried various ways to modify the code, with all possible values for the different variables, but for some reason it is not working. Any suggestions as to what I could be doing wrong, or any other way to do this?

SumitM ( 2014-11-05 11:47:51 -0600 )

answered 2014-10-10 16:33:57 -0600

Which camera are you using? If it's a Kinect or an Asus sensor, the easiest way would be to use PCL: http://pointclouds.org/documentation/tutorials/openni_grabber.php (a condensed sketch along the lines of that tutorial is shown below).

Why can't you use the SDK, and which SDK have you been using?
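
For reference, here is a condensed sketch along the lines of that PCL tutorial (assuming PCL was built with OpenNI support; class and callback names follow the tutorial):

 #include <pcl/io/openni_grabber.h>
 #include <pcl/visualization/cloud_viewer.h>

 class SimpleOpenNIViewer
 {
 public:
     SimpleOpenNIViewer() : viewer("PCL OpenNI Viewer") {}

     // Called by the grabber whenever a new point cloud is available
     void cloud_cb_(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud)
     {
         if (!viewer.wasStopped())
             viewer.showCloud(cloud);
     }

     void run()
     {
         pcl::Grabber* interface = new pcl::OpenNIGrabber();

         boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
             boost::bind(&SimpleOpenNIViewer::cloud_cb_, this, _1);

         interface->registerCallback(f);
         interface->start();

         while (!viewer.wasStopped())
             boost::this_thread::sleep(boost::posix_time::seconds(1));

         interface->stop();
     }

     pcl::visualization::CloudViewer viewer;
 };

 int main()
 {
     SimpleOpenNIViewer v;
     v.run();
     return 0;
 }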


Comments

I have DepthSense cameras and the corresponding SDK. The problem is that when I have to create a disparity image from the two color sensors, it can't be done in real time; the only way is to save the images and process them later. I hope that explains my situation. I desperately need any help I can get.

SumitM ( 2014-10-10 17:21:10 -0600 )

And where exactly is your problem? Can't you save the depth images? Or read them?

FooBar ( 2014-10-11 03:25:19 -0600 )

I cannot read the depth sensor data using OpenCV. The SDK allows me to stream the RGB and depth data, but when I try to build a disparity map from the color cameras and combine it with the depth sensor data, it does not work in the SDK. That's why I need some other method that lets me access the depth sensor data and the color data, and switch between the streams as required for my work. So I need some way to open the depth sensor stream, for example using OpenCV or something like that. Does that explain my situation?

SumitM ( 2014-10-11 12:43:34 -0600 )
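
On the disparity part of the question: a minimal OpenCV block-matching sketch (OpenCV 2.4 API; leftGray/rightGray are placeholders for rectified grayscale frames from the two colour sensors, and the parameters are illustrative and would need tuning) could look like this:

 #include <opencv2/core/core.hpp>
 #include <opencv2/calib3d/calib3d.hpp>   // cv::StereoBM
 #include <opencv2/highgui/highgui.hpp>   // cv::imshow

 // leftGray / rightGray: rectified CV_8UC1 images from the two colour sensors
 cv::Mat leftGray, rightGray, disp, disp8;
 // ... fill leftGray / rightGray from your two colour streams ...

 cv::StereoBM bm( cv::StereoBM::BASIC_PRESET, 64 /*numDisparities, multiple of 16*/, 21 /*SADWindowSize*/ );
 bm( leftGray, rightGray, disp, CV_16S );              // disparities stored as fixed point (x16)

 // Scale to 8 bit for display
 disp.convertTo( disp8, CV_8U, 255.0 / (64 * 16.0) );
 cv::imshow( "disparity", disp8 );
 cv::waitKey();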
