Hey,
So I have been working on a tracker based on the SIFT descriptor, and I came across this FlannBasedMatcher example: http://docs.opencv.org/2.4.2/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html
I tried to implement it and it worked really well.
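For context, here is roughly what my matching code looks like (a minimal sketch along the lines of that tutorial, written against the OpenCV 2.4 API; the image filenames are just placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SIFT lives in the nonfree module in 2.4

using namespace cv;

int main()
{
    // placeholder filenames
    Mat img1 = imread("object.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat img2 = imread("scene.png",  CV_LOAD_IMAGE_GRAYSCALE);
    if (img1.empty() || img2.empty())
        return -1;

    // detect SIFT keypoints and compute descriptors in both images
    SIFT sift;
    std::vector<KeyPoint> keypoints1, keypoints2;
    Mat descriptors1, descriptors2;
    sift(img1, Mat(), keypoints1, descriptors1);
    sift(img2, Mat(), keypoints2, descriptors2);

    // match descriptors1 (query) against descriptors2 (train) with FLANN
    FlannBasedMatcher matcher;
    std::vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);

    // draw the matches
    Mat outImg;
    drawMatches(img1, keypoints1, img2, keypoints2, matches, outImg);
    imshow("matches", outImg);
    waitKey(0);
    return 0;
}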
Now, my question is: how does drawMatches work? As mentioned here,
http://docs.opencv.org/modules/features2d/doc/drawing_function_of_keypoints_and_matches.html?highlight=drawmatch#void%20drawMatches%28const%20Mat&%20img1,%20const%20vector%3CKeyPoint%3E&%20keypoints1,%20const%20Mat&%20img2,%20const%20vector%3CKeyPoint%3E&%20keypoints2,%20const%20vector%3CDMatch%3E&%20matches1to2,%20Mat&%20outImg,%20const%20Scalar&%20matchColor,%20const%20Scalar&%20singlePointColor,%20const%20vector%3Cchar%3E&%20matchesMask,%20int%20flags%29
matches1to2 is a vector of DMatch describing matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]]. Now, how does that indexing actually work?
DMatch has:
float distance;  // distance between the matched descriptors
int queryIdx;    // query descriptor index
int trainIdx;    // train descriptor index
int imgIdx;      // train image index
So, using these fields, how can I (manually) check that keypoints1[i] has a corresponding point in keypoints2[matches[i]]?
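Here is my current guess, continuing from the snippet above (plus #include <iostream>). I am assuming, but have not confirmed, that queryIdx indexes into keypoints1/descriptors1 because descriptors1 was passed as the query set, and trainIdx indexes into keypoints2/descriptors2 because descriptors2 was the train set:

for (size_t i = 0; i < matches.size(); ++i)
{
    const DMatch& m = matches[i];
    // my assumption: queryIdx -> keypoints1 (first image), trainIdx -> keypoints2 (second image)
    Point2f p1 = keypoints1[m.queryIdx].pt;
    Point2f p2 = keypoints2[m.trainIdx].pt;
    std::cout << "match " << i
              << ": keypoints1[" << m.queryIdx << "] at (" << p1.x << ", " << p1.y << ")"
              << " <-> keypoints2[" << m.trainIdx << "] at (" << p2.x << ", " << p2.y << ")"
              << ", descriptor distance = " << m.distance << std::endl;
}

Is that the right way to read it, or does drawMatches interpret matches1to2 differently?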