CaptainSwagpants's profile - activity

2015-10-03 02:45:40 -0600 received badge  Student (source)
2015-08-22 07:46:42 -0600 asked a question How does the OpenNI point cloud mode work? Calibration+Coordinates

Hi,

I'm using OpenCV with OpenNI and a Kinect sensor to track a tennis ball. I'd like to use the depth sensor in OpenNI's XYZ mode, which outputs (X, Y, Z) in meters for a given pixel (Xp, Yp) in a CV_32FC3 Mat. In order to use that in my thesis I need to know how it works, specifically the equations and calibration values used to transform (X, Y) from pixels to meters for a given Z (known in meters before the transformation). I've also been wondering what the origin of the resulting coordinate system is. Is it the depth sensor?

I have not been able to find proper documentation about this, and sifting through the source code hasn't yielded much information either.
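For what it's worth, depth cameras like the Kinect are usually modeled as pinhole cameras, so the pixel-to-meters conversion is typically a back-projection using the depth camera's intrinsics. Here is a minimal sketch of that idea; the focal lengths and principal point below are illustrative placeholder values, not OpenNI's actual calibration for any particular device:

```python
# Hypothetical depth-camera intrinsics (assumed values for illustration):
# focal lengths fx, fy in pixels and principal point (cx, cy) near the
# center of a 640x480 depth image.
fx, fy = 575.8, 575.8
cx, cy = 319.5, 239.5

def pixel_to_meters(xp, yp, z):
    """Back-project depth pixel (xp, yp) with depth z (meters) into
    camera-space coordinates via the pinhole model:
        X = (xp - cx) * z / fx,  Y = (yp - cy) * z / fy."""
    x = (xp - cx) * z / fx
    y = (yp - cy) * z / fy
    return x, y, z

# Under this model the origin is the depth camera's optical center,
# with Z pointing along the optical axis: a pixel at the principal
# point maps to (0, 0, z).
print(pixel_to_meters(319.5, 239.5, 2.0))  # -> (0.0, 0.0, 2.0)
```

If OpenNI follows this model, the origin question answers itself: (0, 0, 0) would be the optical center of the depth (IR) camera, but that should be confirmed against the actual implementation.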

2015-08-19 09:03:55 -0600 commented answer Kinect RGB and depth frame

Hey, piggybacking on this: do you know how OpenNI generates the point cloud? It's hard to find documentation about it. What is the origin of the coordinate system? Is it the IR sensor? How does OpenNI generate the X and Y values in meters? There has to be some sort of internal calibration to calculate (X, Y) in meters from (X, Y) in pixels plus Z in meters. I can't use the point cloud feature in my bachelor's thesis unless I know how it works. Thanks in advance :)