Passing ORB Features to calcOpticalFlowPyrLK

asked 2015-12-30 12:58:24 -0600 by procoding

updated 2015-12-30 12:59:13 -0600

I am working on a project where I need to track keypoints found with ORB across the frames of a video. I understand that Shi-Tomasi points returned by "goodFeaturesToTrack" are normally used for this, but since this is for an image stitching project I need the descriptor information that comes with each ORB keypoint. I have seen a similar question on the subject, but no solution seems to have been reached there. My current approach is to build a list of tuples holding the coordinates of each feature I detect. I suspect this is not the correct format, because I keep getting this error at the calcOpticalFlowPyrLK call:

TypeError: prevPts is not a numpy array, neither a scalar

I took some Shi-Tomasi points out of the image using goodFeaturesToTrack and printed them out to the console. For some reason, only one came up, and it was formatted like so:

[[[ 2976.   332.]]]

Here is a snippet of what the ORB feature array looks like:

[(2228.739013671875, 1203.9490966796875), (2898.794189453125, 1092.8704833984375), (3060.037353515625, 852.7973022460938), (3217.697265625, 150.49363708496094), (372.6509094238281, 157.66000366210938), (3120.951416015625, 1519.2691650390625)]

So my data is a list of 2-element tuples. What exactly is the format of the Shi-Tomasi points (the extra set of brackets seems redundant), and how would I convert my list of tuples to that form if I need to?
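For context, here is a stripped-down sketch of what I am doing (file names and the ORB setup are simplified placeholders, not my exact code):

import cv2

# two consecutive frames (placeholder file names)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# detect ORB keypoints; the descriptors are what I need later for stitching
# (cv2.ORB_create() on OpenCV 3.x; older versions use cv2.ORB())
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(img1, None)

# my current conversion: a plain Python list of (x, y) tuples
prevCoords = [kp.pt for kp in keypoints]

# this is the call that raises "TypeError: prevPts is not a numpy array"
# nextPts, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, prevCoords, None)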


Comments

I don't know how it works in Python, but in C++ one of ORB's results is a vector<KeyPoint>, while calcOpticalFlowPyrLK needs a vector<Point2f>. In C++ I have to do something like this:

Ptr<Feature2D> b = ORB::create();
vector<KeyPoint> keyImg1;
Mat descImg1;
vector<uchar> status;
vector<float> err;

// detect ORB keypoints (and descriptors) in the first image
b->detectAndCompute(img1, Mat(), keyImg1, descImg1, false);

// convert KeyPoint to Point2f, which is what calcOpticalFlowPyrLK expects
vector<cv::Point2f> pts1(keyImg1.size()), pts2;
for (size_t i = 0; i < keyImg1.size(); i++)
    pts1[i] = keyImg1[i].pt;

calcOpticalFlowPyrLK(img1, img2, pts1, pts2, status, err);
LBerger ( 2015-12-30 14:10:02 -0600 )

My conversion goes like this:

prevCoords = []
for point in prevPnts:
    prevCoords.append(point.pt)

prevPnts is the list of keypoints and prevCoords is a Python list of the coordinates of each keypoint. Should I not be using a Python list to store the coordinates? After all, the error message did say:

TypeError: prevPts is not a numpy array

What, then, should I use?

procoding ( 2015-12-30 14:23:41 -0600 )

There is a C++ function, cv::KeyPoint::convert(), that takes a vector of KeyPoints and puts out a vector of Point2f, or vice versa, depending on the order of the arguments. Not sure of the Python syntax, but that's what you need to do.
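Something along these lines might be the Python equivalent (untested guess, reusing the variable names from your snippet above; it assumes your build exposes the cv2.KeyPoint_convert binding):

import cv2
import numpy as np

# KeyPoint -> Point2f conversion in one call, if the binding is available
pts = cv2.KeyPoint_convert(keypoints)

# calcOpticalFlowPyrLK wants float32 points; reshape to N x 1 x 2 so the
# layout matches what goodFeaturesToTrack returns
prevPts = np.asarray(pts, dtype=np.float32).reshape(-1, 1, 2)

nextPts, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, prevPts, None)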

Tetragramm ( 2015-12-30 14:25:55 -0600 )

I don't understand Python, but you can find a Python example here, or maybe here.

PS: in C++ it's not a list but a vector (array). But that is C++, not Python.

LBerger ( 2015-12-30 14:31:13 -0600 )

Oh, and for the second part of your question, what is the format of the points? Each one is a KeyPoint class object, described here: http://docs.opencv.org/3.1.0/d2/d29/classcv_1_1KeyPoint.html#acfcc8e0dd1a634a7583686e18d372237
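As for the extra brackets: goodFeaturesToTrack returns its points as an N x 1 x 2 float32 numpy array, which is where the third level of brackets comes from, while an ORB keypoint stores its coordinates in its .pt attribute. A small illustration (just numpy, not from that docs page) using the numbers from your question:

import numpy as np

# one of the Shi-Tomasi corners you printed: an N x 1 x 2 float32 array
shi_tomasi = np.array([[[2976., 332.]]], dtype=np.float32)

# the same kind of data built from your ORB (x, y) tuples, reshaped to match
orb_tuples = [(2228.739013671875, 1203.9490966796875),
              (2898.794189453125, 1092.8704833984375)]
orb_pts = np.float32(orb_tuples).reshape(-1, 1, 2)

print(shi_tomasi.shape)   # (1, 1, 2)
print(orb_pts.shape)      # (2, 1, 2)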

Tetragramm ( 2015-12-30 16:45:59 -0600 )