
Is it possible to conduct pose estimation by matching features between a picture taken from a camera and pre-taken environmental photos?

asked 2017-03-21 10:36:01 -0600 by yorkhuang

Can we conduct pose estimation using pre-taken environmental photos? We want to run a mixed-reality experiment: we take four photos of a room and texture them onto the walls of a virtual room to create a virtual environment. While a user wearing a VR headset with a customized front-facing camera navigates this virtual room, the camera takes snapshots of the physical room, and these snapshots are used to estimate the pose relative to the walls of the virtual room. So, the question is: is it possible to conduct pose estimation by matching features between the picture taken from the camera and the pre-taken environmental photos? Thanks,
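For reference, the feature-matching step between a camera snapshot and one pre-taken photo could look roughly like this minimal ORB sketch (the file names and feature count are made-up placeholders):

import cv2

# Pre-taken environmental photo and live camera snapshot (placeholder file names)
ref = cv2.imread("wall_photo.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe ORB features in both images
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

# Hamming distance for binary ORB descriptors, cross-check for robustness
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frame), key=lambda m: m.distance)

print(len(matches), "matches found")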


1 answer


answered 2017-03-21 17:44:39 -0600 by Tetragramm

Yes. Depending on the accuracy of the environmental photos and the 3D reconstruction you make from them, it is possible to get within millimeters of the correct position. I'm not sure about the rotation, but it's within the uncertainty of the Vive's tracking.

The difficulty, of course, is making a good 3D reconstruction from your environmental photos.


Comments

Hi, Tetragramm, thank you for your confirmation. As you explained, my current problem is how to reconstruct a good 3D environment from my environmental photos. Any recommendations? Someone told me that I have to consider the focal length and FOV of the camera when taking the environment photos. Any comments? Thanks,

yorkhuang (2017-03-21 21:08:58 -0600)

If you don't have a tracked camera or a reference point, the problem is structure from motion (very hard). If you do have a tracked camera, then it's just triangulation (easy). A tracked camera is anything whose pose you can determine wherever it is.

If you have one known landmark (say your poster or an ArUco marker), then it's SLAM: Simultaneous Localization and Mapping (medium/hard).

I can point you at some good triangulation code. SfM has an OpenCV module, and I have no idea where to find a good SLAM library. The good algorithms for SLAM have changed so much recently.

Tetragramm (2017-03-21 21:45:33 -0600)
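To illustrate the triangulation case mentioned above, here is a minimal sketch with two cameras of known pose (the intrinsics, camera poses, and pixel coordinates are all made-up placeholder values):

import numpy as np
import cv2

# Placeholder intrinsics shared by both cameras
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Camera 1 at the world origin, camera 2 shifted 0.5 m along x (known poses)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Pixel coordinates of the same physical point seen by each camera (2xN)
pts1 = np.array([[400.0], [260.0]])
pts2 = np.array([[350.0], [260.0]])

# Triangulate and convert from homogeneous to Euclidean coordinates
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts3d = (pts4d[:3] / pts4d[3]).T
print(pts3d)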

Thank you, Tetragramm. Is it possible to achieve pose estimation with solvePnP() without SLAM? The scenario is similar to what I explained in my previous post: I set a physical spot as the origin of the world coordinate system and build the virtual room, along with its photo-textured walls, accordingly. So, by matching features between the camera-captured image and the photo-textured walls, can I perform pose estimation with solvePnP()? Thanks,

yorkhuang (2017-03-21 23:25:35 -0600)
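Assuming each matched keypoint on a photo-textured wall has a known 3-D position in the room's world frame, the solvePnP step could look like this minimal sketch (all numeric values are made-up placeholders; solvePnPRansac is used so that bad feature matches are tolerated):

import numpy as np
import cv2

# 3-D coordinates of matched points on one wall (world frame, metres; placeholders)
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.5, 0.5, 0.0],
                          [0.2, 0.8, 0.0]], dtype=np.float32)

# Corresponding pixel coordinates found by feature matching in the camera image
image_points = np.array([[120.0, 40.0],
                         [520.0, 40.0],
                         [520.0, 440.0],
                         [120.0, 440.0],
                         [320.0, 240.0],
                         [200.0, 360.0]], dtype=np.float32)

# Placeholder intrinsics; assume an undistorted camera for the sketch
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5)

# RANSAC variant rejects outlier correspondences from the feature matcher
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points,
                                             camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix
    cam_pos = -R.T @ tvec        # camera position in world coordinates
    print(cam_pos.ravel())

With at least four well-distributed correspondences and reasonable intrinsics, this recovers the camera pose relative to the world origin you defined at the physical spot.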
