Pose Estimation for UDT-AR?

Hey guys,

I'm trying to build an Android application that supports UDT (User Defined Target) tracking and augmented reality. So far I have adapted FAST/SIFT for the recognition, but that is just a kind of tracking-by-detection and does not use pose estimation yet.

Since the "tracking by detection" Aspect is kinda slow, I want to implement a method, which starts Tracking, after my FAST/SIFT Recognition got a Pose Estimation. Which I want to solve with OpenCV (Because I already use the Native Library)

Now to my actual question: how can I get the pose estimation after I have found some matches? I thought solvePnP could do the job, but for that I need the 3D points of the object and the camera matrix (which I don't have!). I know how I could obtain the camera matrix, but how do I get the 3D points? Since my tracking runs on a UDT, I don't know what 3D points the target has.
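
The only idea I have so far: if the UDT is assumed to be planar, the matched reference keypoints could maybe serve as 3D object points on the Z = 0 plane. Roughly like this (all names are placeholders, and I don't know if this assumption is valid for a user-defined target):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // Sketch: pose from 2D matches against a planar reference image (UDT).
    // Assumption: the target is flat, so every matched reference keypoint
    // can be treated as a 3D point on the Z = 0 plane.
    void estimatePose(const std::vector<cv::Point2f>& refPts,   // matches in reference image
                      const std::vector<cv::Point2f>& framePts, // same matches in camera frame
                      const cv::Mat& cameraMatrix,              // from calibration
                      const cv::Mat& distCoeffs,
                      cv::Mat& rvec, cv::Mat& tvec)
    {
        std::vector<cv::Point3f> objectPts;
        objectPts.reserve(refPts.size());
        for (const cv::Point2f& p : refPts)
            objectPts.emplace_back(p.x, p.y, 0.0f); // planar target: Z = 0

        // The RANSAC variant also tolerates the outlier matches I may still have.
        cv::solvePnPRansac(objectPts, framePts, cameraMatrix, distCoeffs, rvec, tvec);
    }
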

I'm grateful for every idea and suggestion.

Edit: To give a bit more input: after detecting matches in the camera frame, I remove outliers, find the homography between the frame and my reference image (UDT), and calculate the perspective transformation, both via findHomography() and perspectiveTransform().
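
Simplified, that step currently looks roughly like this (variable names are placeholders):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // Sketch: locate the UDT in the current frame from the filtered matches.
    // refPts/framePts are the matched keypoint locations, refSize is the
    // size of my reference image.
    std::vector<cv::Point2f> locateTarget(const std::vector<cv::Point2f>& refPts,
                                          const std::vector<cv::Point2f>& framePts,
                                          const cv::Size& refSize)
    {
        // RANSAC rejects remaining outliers while estimating the homography.
        cv::Mat H = cv::findHomography(refPts, framePts, cv::RANSAC, 3.0);

        // Project the four corners of the reference image into the frame.
        std::vector<cv::Point2f> corners = {
            cv::Point2f(0.0f, 0.0f),
            cv::Point2f((float)refSize.width, 0.0f),
            cv::Point2f((float)refSize.width, (float)refSize.height),
            cv::Point2f(0.0f, (float)refSize.height)
        };
        std::vector<cv::Point2f> frameCorners;
        cv::perspectiveTransform(corners, frameCorners, H);
        return frameCorners; // outline of the target in the current frame
    }
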

Update: I looked through some other applications and examples and found another solution where the locations of the points are calculated with the KLT algorithm. AFAIK this algorithm is implemented in OpenCV's calcOpticalFlowPyrLK, so is it possible to solve it that way?

Furthermore, if it is possible to use this algorithm, which images should be passed in? Is prevImg my previously detected frame, and nextImg the current frame?
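
If my understanding is right, the call would look something like this (prevGray and currGray are my assumed names for the last processed frame and the newly captured frame, both grayscale):

    #include <opencv2/video/tracking.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // Sketch: KLT tracking between two consecutive grayscale frames.
    // prevGray = the frame in which the points were last known,
    // currGray = the newly captured frame.
    void trackPoints(const cv::Mat& prevGray, const cv::Mat& currGray,
                     std::vector<cv::Point2f>& points)
    {
        std::vector<cv::Point2f> nextPts;
        std::vector<unsigned char> status; // 1 where the flow was found
        std::vector<float> err;

        cv::calcOpticalFlowPyrLK(prevGray, currGray, points, nextPts, status, err);

        // Keep only the successfully tracked points for the next iteration.
        std::vector<cv::Point2f> kept;
        for (size_t i = 0; i < nextPts.size(); ++i)
            if (status[i])
                kept.push_back(nextPts[i]);
        points = std::move(kept);
    }
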