2016-05-15 12:42:59 -0600 | received badge | ● Editor (source) |
2016-05-15 12:42:19 -0600 | asked a question | Get object location from optical flow In the first streaming camera frame (call it frame A) I found the location of a marker and ran solvePnP, so I can recover the marker's 3D pose. That works fine. For the following camera frames I want to find the new 3D marker pose using optical flow. I have detected salient features in frames A and B and computed the fundamental and essential matrices. But how do I use these to find the new 3D pose of the marker? Here is the code so far: I tried to put the new P1 matrix values into the new 3D pose matrix, but that gave me strange results. How can I get the 3D pose in the next frame based on optical flow? |
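One way to answer the question above: the essential matrix only gives the relative camera motion between frames A and B (e.g. via `cv2.recoverPose`, whose translation is known only up to scale); the marker's pose in frame B is then the composition of that relative motion with the pose from solvePnP in frame A. A minimal numpy sketch of the composition, using synthetic rotation/translation values as stand-ins for the real solvePnP and recoverPose outputs:

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def rot_z(angle):
    """Rotation about the z axis (helper for the synthetic example)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Marker pose in camera A, as would come from solvePnP (synthetic here).
R_A = rot_z(0.1)
t_A = np.array([0.0, 0.0, 2.0])
T_marker_to_camA = to_homogeneous(R_A, t_A)

# Relative camera motion A -> B, as would come from cv2.recoverPose on the
# essential matrix (synthetic here). recoverPose's translation has unknown
# scale; the known marker size from frame A can be used to fix it.
R_rel = rot_z(0.05)
t_rel = np.array([0.1, 0.0, 0.0])
T_camA_to_camB = to_homogeneous(R_rel, t_rel)

# Marker pose in camera B is the composition of the two transforms,
# not the raw projection-matrix entries.
T_marker_to_camB = T_camA_to_camB @ T_marker_to_camA
```

This is why copying P1 values into the pose matrix gives odd results: P1 encodes camera-to-camera motion, not marker-to-camera pose, and its translation scale is arbitrary until anchored by a known metric quantity such as the marker size.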
2016-04-26 02:58:39 -0600 | received badge | ● Enthusiast |
2016-04-22 07:14:04 -0600 | received badge | ● Scholar (source) |
2016-04-22 07:14:03 -0600 | commented answer | Extended 3D tracking with ORB Awesome, thank you very much! I'll have a look at your samples. :) |
2016-04-21 06:17:40 -0600 | asked a question | Extended 3D tracking with ORB I am developing a fairly simple mobile AR application that uses basic Hamming-code marker detection. Now I want to take it a step further and continue 3D tracking of the scene based on the marker's initial 3D position. I want to build a feature similar to the Extended Tracking option in the Vuforia SDK (but since I have to use open-source options, Vuforia won't work here). What steps do I have to take to achieve this? I understand that I need some kind of feature detection algorithm - ORB seems like a good option for this, right? Once I have found the feature points in the scene, how can I tell whether these points lie on the same 3D planar surface as the marker? |
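For the planar-surface question above, a common approach: points that lie on the marker's plane are related between two views by a single homography, so one can estimate H from the tracked ORB matches (e.g. with `cv2.findHomography` and RANSAC) and treat the inliers as on-plane points. A minimal numpy sketch of the inlier test, with a synthetic homography and points standing in for real ORB matches:

```python
import numpy as np

def homography_inliers(H, pts_a, pts_b, thresh=3.0):
    """Classify correspondences as on-plane by reprojection error under H:
    points on the marker's plane satisfy pts_b ~ H * pts_a (projectively)."""
    n = pts_a.shape[0]
    proj = (H @ np.hstack([pts_a, np.ones((n, 1))]).T).T
    proj = proj[:, :2] / proj[:, 2:3]          # back to inhomogeneous pixels
    err = np.linalg.norm(proj - pts_b, axis=1)
    return err < thresh

# Synthetic plane-induced homography and matched points (assumed values).
H = np.array([[1.02,  0.01,  5.0],
              [-0.01, 0.99, -3.0],
              [1e-4,  0.0,   1.0]])
pts_a = np.array([[100.0, 100.0], [200.0, 150.0], [300.0, 80.0], [120.0, 220.0]])
pts_b = (H @ np.hstack([pts_a, np.ones((4, 1))]).T).T
pts_b = pts_b[:, :2] / pts_b[:, 2:3]
pts_b[3] += 25.0                               # simulate one off-plane feature

mask = homography_inliers(H, pts_a, pts_b)     # last point is rejected
```

In practice one would let RANSAC estimate H from the matches and read the inlier mask directly; the threshold (here 3 pixels) trades off tolerance to tracking noise against accepting off-plane points.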