Hey guys,
I'm trying to build an Android application that supports UDT (User-Defined Target) tracking and augmented reality. So far I have adapted FAST/SIFT for the recognition, but that is only tracking by detection and does not yet use any pose estimation.
Since the tracking-by-detection approach is quite slow, I want to switch to actual tracking once my FAST/SIFT recognition has produced a pose estimate. I'd like to solve this with OpenCV, because I already use the native library.
Now to my question: how can I get the pose estimate after I have found some matches? I thought that solvePnP could do the job, but for that I need the 3D points of the object and the camera matrix (which I don't have!).
I know how I could obtain the camera matrix, but how do I get the 3D points? Since my tracking runs on a UDT, I don't know in advance which 3D points the target has.
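For what it's worth, one common workaround for user-defined targets is to assume the target is planar: the 3D model points for solvePnP are then just the keypoint locations from the reference snapshot, placed on the Z = 0 plane and scaled to some assumed physical size. A minimal sketch of that idea (the helper name, the uniform pixels-to-metres scale, and the chosen target width are my assumptions, not part of any OpenCV API):

```python
# Planar-target assumption: the UDT lives entirely in the Z = 0 plane, so each
# 2D keypoint (x, y) from the reference image maps to a 3D model point
# (x * s, y * s, 0), where s converts pixels to metric units.

def planar_object_points(keypoints_px, image_width_px, target_width_m):
    """Map reference-image keypoints (in pixels) onto the Z = 0 target plane.

    keypoints_px   -- list of (x, y) pixel coordinates of the matched keypoints
    image_width_px -- width of the reference snapshot in pixels
    target_width_m -- assumed physical width of the target in metres
    """
    scale = target_width_m / image_width_px  # metres per pixel (assumed uniform)
    return [(x * scale, y * scale, 0.0) for (x, y) in keypoints_px]


# Example: a 400 px wide reference image of a target assumed to be 0.2 m wide.
pts3d = planar_object_points([(0, 0), (400, 0), (400, 300)], 400, 0.2)
# -> [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.2, 0.15, 0.0)]
```

These 3D points, paired with the matched 2D points in the live camera frame, are exactly the correspondences solvePnP expects; the absolute scale only affects the translation magnitude, not the rotation, so a rough guess for the physical size is often good enough for AR overlays.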
I'm grateful for any ideas and suggestions.