Is it possible to conduct pose estimation by matching features between a camera image and pre-taken environmental photos?
Can we conduct pose estimation using pre-taken environmental photos? We want to run a mixed reality experiment: we take four photos of a room and texture them onto the walls of a virtual room to create a virtual environment. While a user wearing a VR headset with a customized front-facing camera navigates this virtual room, the camera takes snapshots of the physical room, and we want to use them to estimate the headset's pose relative to the walls of the virtual room. So the question is: is it possible to conduct pose estimation by matching features between the image taken from the camera and the pre-taken environmental photos? A rough sketch of the pipeline I have in mind is below. Thanks!
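For context, this is a minimal sketch of the kind of feature-matching pipeline I mean, assuming OpenCV, a calibrated pinhole camera, and hypothetical file names; the intrinsic matrix K shown here is a placeholder and would come from camera calibration in practice.

```python
import cv2
import numpy as np

# Hypothetical file names, for illustration only.
reference = cv2.imread("wall_photo.jpg", cv2.IMREAD_GRAYSCALE)   # pre-taken environmental photo
frame = cv2.imread("headset_frame.jpg", cv2.IMREAD_GRAYSCALE)    # snapshot from the headset camera

# Assumed intrinsics; in practice these come from calibrating the front camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Detect and describe features in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

# Match binary ORB descriptors with Hamming distance and cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frame), key=lambda m: m.distance)

# Build corresponding point arrays from the best matches.
pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in matches[:500]])
pts_frame = np.float32([kp_frame[m.trainIdx].pt for m in matches[:500]])

# Estimate the relative pose (rotation R, translation direction t, up to scale).
E, mask = cv2.findEssentialMat(pts_ref, pts_frame, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts_ref, pts_frame, K, mask=mask)

print("Estimated rotation:\n", R)
print("Estimated translation direction (unit scale):\n", t.ravel())
```

Since each reference photo shows a roughly planar wall, I am also wondering whether a homography-based approach (cv2.findHomography followed by cv2.decomposeHomographyMat) would be a better fit than the essential-matrix approach above.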