
Depth data from sensor to z-buffer of OpenGL

Hi guys! I was wondering if you have any ideas about a problem called occlusion handling. Let's say I use OpenCV to render a virtual object on top of a marker (augmented reality) tracked by an RGB-D sensor. Since my sensor gives me a depth map of the scene, I could use it to render only the virtual-object polygons that are not occluded (hidden) by my hand.

I read that I could mask the z-buffer of OpenGL with the values of the depth Mat, so that in the end only the closer fragments survive. What I don't really get, though, is what happens with the values of the pixels that belong to the user's hand. Does anyone have an idea for a possible detailed implementation, or know of any open-source project that does this? A sketch of what I have in mind is below.
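To make my idea concrete, here is a minimal sketch of how I imagine it could work, assuming legacy fixed-function OpenGL (glWindowPos2i/glDrawPixels need OpenGL 1.4+) and a Kinect-style 16-bit depth map in millimetres. The helper `depthToZBuffer` and the near/far values are my own placeholders, not from any existing project:

```c++
#include <cstdint>
#include <opencv2/core.hpp>
#include <GL/gl.h>

// Hypothetical helper: convert a metric depth map (CV_16UC1, millimetres)
// into normalized window-space depth in [0,1], matching the near/far
// planes of the OpenGL perspective projection.
cv::Mat depthToZBuffer(const cv::Mat& depthMM, float zNear, float zFar)
{
    cv::Mat zbuf(depthMM.size(), CV_32F);
    for (int y = 0; y < depthMM.rows; ++y) {
        for (int x = 0; x < depthMM.cols; ++x) {
            float d = depthMM.at<uint16_t>(y, x) * 0.001f; // mm -> metres
            if (d <= 0.0f) {
                // invalid sensor pixel: push it to the far plane
                // so it never occludes the virtual object
                zbuf.at<float>(y, x) = 1.0f;
            } else {
                // window depth for a standard perspective projection
                zbuf.at<float>(y, x) =
                    zFar * (d - zNear) / (d * (zFar - zNear));
            }
        }
    }
    cv::flip(zbuf, zbuf, 0); // OpenGL origin is bottom-left, OpenCV top-left
    return zbuf;
}

void renderFrame(const cv::Mat& depthMM, int width, int height)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // 1. draw the camera image as the background (depth writes disabled)

    // 2. fill the depth buffer from the sensor instead of from geometry
    cv::Mat zbuf = depthToZBuffer(depthMM, 0.1f, 10.0f);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // write depth only
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_ALWAYS); // overwrite the depth buffer unconditionally
    glWindowPos2i(0, 0);
    glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT,
                 zbuf.ptr<float>());
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthFunc(GL_LESS);

    // 3. render the virtual object: fragments that lie behind the hand
    //    (which has smaller sensor depth) fail the depth test and are
    //    discarded, which should give the occlusion effect
}
```

If I understand the technique correctly, nothing special needs to happen for the hand pixels: they simply end up as small depth values in the z-buffer, so any virtual-object fragment behind them fails the GL_LESS test. Is that right, and is this roughly how existing implementations do it?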