How does the OpenNI point cloud mode work? Calibration+Coordinates

Hi,

I'm using OpenCV with OpenNI and a Kinect sensor to track a tennis ball. I'd like to use the depth sensor in OpenNI's XYZ (point cloud) mode, which outputs (X, Y, Z) in meters for a given pixel (Xp, Yp) as a CV_32FC3 Mat. To be able to use this in my thesis I need to know how it works internally: specifically, the equations and calibration values used to transform (Xp, Yp) from pixels to meters for a given Z (which is already known in meters before the transformation). I've also been wondering what the origin of the resulting coordinate system is. Is it the depth sensor itself?
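
For reference, this is roughly how I'm grabbing the point cloud map (a minimal sketch, assuming the standard OpenCV CAP_OPENNI grab/retrieve interface; the exact constant names may differ between OpenCV versions):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        // Open the Kinect through OpenCV's OpenNI backend
        cv::VideoCapture capture(cv::CAP_OPENNI);
        if (!capture.isOpened())
        {
            std::cerr << "Could not open the OpenNI device" << std::endl;
            return 1;
        }

        cv::Mat pointCloud;
        if (capture.grab() &&
            capture.retrieve(pointCloud, cv::CAP_OPENNI_POINT_CLOUD_MAP))
        {
            // pointCloud is a CV_32FC3 Mat: each pixel holds (X, Y, Z) in meters
            cv::Vec3f xyz = pointCloud.at<cv::Vec3f>(240, 320); // row, col
            std::cout << "X=" << xyz[0] << " Y=" << xyz[1]
                      << " Z=" << xyz[2] << std::endl;
        }
        return 0;
    }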

I haven't been able to find proper documentation on this, and sifting through the source code hasn't turned up much information either.
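
To make the question concrete: my guess is that it is something like the standard pinhole back-projection below, where fx, fy, cx, cy are my own notation for the depth camera's focal lengths and principal point (not values I've actually found in the OpenNI/OpenCV source), but I can't confirm this or find where the calibration values come from.

    X = (Xp - cx) * Z / fx
    Y = (Yp - cy) * Z / fy
    Z = depth at (Xp, Yp), in meters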