Depth data from sensor to z-buffer of OpenGL [closed]

asked 2015-05-12 15:36:44 -0500

Hi guys! I was wondering if you have any idea about a problem called occlusion handling. Let's say I use OpenCV to render a virtual object on top of a marker (augmented reality) tracked by an RGB-D sensor. Since the sensor gives me a depth map of the scene, I could use it to render only those virtual-object polygons that are not occluded (hidden) by my hand.

I read that I could mask OpenGL's z-buffer with the values of the depth map, so that in the end only the closer fragments get drawn. What I don't quite understand is what happens with the pixels that belong to the user's hand. Does anyone have an idea for a detailed implementation, or know of an open-source project that does this?


Closed for the following reason: question is off-topic or not relevant, by berak
close date 2015-05-12 15:44:08.357372


Please understand that (unfortunately) how to handle OpenGL's z-buffer correctly is pretty much off-topic here.

IMHO, you'll get a much better answer asking on ##opengl on irc.freenode.

berak ( 2015-05-12 15:42:19 -0500 )