
3D location of detected keypoints

asked 2013-07-31 13:46:52 -0600

andrei.toader

Hi all,

I want to know if it's possible to get the location of the keypoints detected by any feature detector. To be explicit enough, I want to know the position in 3D (with the camera viewpoint as the reference frame) of the detected keypoints. If it's possible, I would appreciate it if someone could give me some hints about this. I tried searching for this but haven't found any clues.

Thanks in advance, Andrei


1 answer


answered 2013-07-31 16:46:43 -0600

With one camera it is more or less impossible (or at least very difficult in some contexts), because the depth information is lost on the camera's 2D sensor. Estimating depth with a single camera (if that is what you want) is very difficult and often not accurate. You could Google it, but to the best of my knowledge there is no such thing in OpenCV. If you find a good algorithm and implement it, make a pull request... ;-)

But you could use two cameras. With a stereo matching approach, you can estimate the 3D position of your keypoints. I think you don't have to compute the keypoints on both images; just detect them on one and reproject them into 3D. See the stereo documentation here, and the samples cpp/stereo_calib.cpp and cpp/stereo_match.cpp.
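Roughly, the pipeline could look like the minimal sketch below. It is an untested outline using the newer OpenCV C++ API (which differs slightly from the 2.4 API current when this was asked): the image file names, the choice of ORB and StereoBM, and the extrinsics.yml file written by stereo_calib.cpp are assumptions, not the only way to do it. It detects keypoints on the rectified left image, computes a disparity map over the pair, and then reprojects with the Q matrix from stereoRectify.

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Placeholder inputs: a rectified stereo pair and the extrinsics file
        // written by stereo_calib.cpp (it contains the 4x4 reprojection matrix Q).
        cv::Mat left  = cv::imread("left_rectified.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right_rectified.png", cv::IMREAD_GRAYSCALE);
        cv::FileStorage fs("extrinsics.yml", cv::FileStorage::READ);
        cv::Mat Q;
        fs["Q"] >> Q;

        // 1. Detect keypoints on the left image only.
        cv::Ptr<cv::ORB> detector = cv::ORB::create();
        std::vector<cv::KeyPoint> keypoints;
        detector->detect(left, keypoints);

        // 2. Compute a dense disparity map over the rectified pair.
        cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 21);
        cv::Mat disp16, disp;
        matcher->compute(left, right, disp16);
        disp16.convertTo(disp, CV_32F, 1.0 / 16.0);  // StereoBM outputs 16x fixed-point disparities

        // 3. Reproject the disparity map into camera coordinates and read off
        //    the 3D point under each keypoint.
        cv::Mat xyz;
        cv::reprojectImageTo3D(disp, xyz, Q, true);
        for (const cv::KeyPoint& kp : keypoints)
        {
            cv::Point3f p = xyz.at<cv::Point3f>(cvRound(kp.pt.y), cvRound(kp.pt.x));
            std::cout << kp.pt << " -> " << p << " (camera coordinates)" << std::endl;
        }
        return 0;
    }

The accuracy of the resulting 3D points depends on the calibration and on the disparity quality at the keypoint locations; for sparse keypoints you could also match descriptors between the two images and triangulate only those correspondences instead of computing a dense disparity map.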


Comments

Well, the problem is that the app has to run on Android, hence one camera. Edit: the current app detects hand gestures in real time. I am thinking there might be a way to detect the hand position before the keypoints are detected. It could work like that too, since all I have to do is interact with some augmented reality objects.

Thanks for the help

andrei.toader ( 2013-07-31 23:58:23 -0600 )
