I am working on a project where I need to track ORB keypoints across the frames of a video. I understand that the Shi-Tomasi points returned by goodFeaturesToTrack are generally used for this, but since this is for an image stitching project, I need the descriptor information that comes with each ORB keypoint. I have seen a similar question on the subject, but no solution seems to have been reached there. My current method is to build a list of (x, y) tuples from the coordinates of each feature I detect. I am worried that this is not the correct format, though, because I keep getting this error on the calcOpticalFlowPyrLK line:
TypeError: prevPts is not a numpy array, neither a scalar
To compare, I extracted some Shi-Tomasi points from the image with goodFeaturesToTrack and printed them to the console. For some reason only one came up, and it was formatted like so:
[[[ 2976. 332.]]]
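From that printout, the Shi-Tomasi output appears to be a NumPy float32 array of shape (N, 1, 2): one row per point, with an extra middle axis of length 1. A minimal sketch rebuilding that single point by hand (the coordinates are just the ones printed above, and the float32 dtype is my assumption based on what goodFeaturesToTrack typically returns):

```python
import numpy as np

# Rebuild the lone Shi-Tomasi point in the layout goodFeaturesToTrack
# seems to use: a float32 array of shape (N, 1, 2), here with N = 1.
shi_tomasi_pts = np.array([[[2976., 332.]]], dtype=np.float32)

print(shi_tomasi_pts.shape)  # (1, 1, 2)
print(shi_tomasi_pts.dtype)  # float32
```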
Here is a snippet of what the ORB feature array looks like:
[(2228.739013671875, 1203.9490966796875), (2898.794189453125, 1092.8704833984375), (3060.037353515625, 852.7973022460938), (3217.697265625, 150.49363708496094), (372.6509094238281, 157.66000366210938), (3120.951416015625, 1519.2691650390625)]
So my data is a list of 2-element (x, y) tuples. What exactly is the format of the Shi-Tomasi points (the extra set of brackets looks redundant), and how would I convert my list of tuples to that form if I need to?
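In case it helps, this is the conversion I have been considering, using a shortened version of my ORB coordinate list (the variable names are mine, and I am assuming the target is the float32 array with the extra middle axis that goodFeaturesToTrack printed):

```python
import numpy as np

# Coordinates pulled from the ORB keypoints (kp.pt for each keypoint);
# truncated here to three of the tuples from my actual list.
orb_pts = [(2228.739013671875, 1203.9490966796875),
           (2898.794189453125, 1092.8704833984375),
           (372.6509094238281, 157.66000366210938)]

# Convert the list of (x, y) tuples into a float32 array of shape
# (N, 1, 2), matching the layout of the Shi-Tomasi output.
prev_pts = np.float32(orb_pts).reshape(-1, 1, 2)

print(prev_pts.shape)  # (3, 1, 2)
print(prev_pts.dtype)  # float32
```

Would passing an array like prev_pts as the prevPts argument be the right approach, or is there something else calcOpticalFlowPyrLK expects?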