Why is the matching giving different results, or what am I doing wrong?
I have an application that uses a database of descriptors extracted from a set of images (all containing the same type of object). The descriptors are stored in a vector of Mat (one Mat per group of descriptors). The application runs a knnMatch between the descriptors of the input image (an image different from the training ones, which may not even contain the searched object) and each group in the vector of descriptors, and counts the matches obtained for each feature. After all the matches are counted, it filters the keypoints based on the counters: if the number of matches for a keypoint is greater than a threshold, the keypoint is kept.
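Roughly, the logic looks like this (a simplified sketch; the variable names are placeholders for my real ones, and the ratio test stands in for my actual acceptance criterion):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::KeyPoint> filterKeypointsByMatchCount(
    const cv::Mat& queryDescriptors,
    const std::vector<cv::KeyPoint>& keypoints,
    const std::vector<cv::Mat>& descriptorGroups,
    int matchThreshold)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<int> counts(keypoints.size(), 0);

    // One knnMatch per descriptor group: each query descriptor gets its
    // k nearest neighbours inside that group only.
    for (const cv::Mat& group : descriptorGroups)
    {
        std::vector<std::vector<cv::DMatch>> knnMatches;
        matcher.knnMatch(queryDescriptors, group, knnMatches, 2);

        for (const auto& candidates : knnMatches)
        {
            // Example acceptance criterion (ratio test); count a hit for
            // the corresponding query keypoint when it passes.
            if (candidates.size() == 2 &&
                candidates[0].distance < 0.8f * candidates[1].distance)
                counts[candidates[0].queryIdx]++;
        }
    }

    // Keep only the keypoints whose match count exceeds the threshold.
    std::vector<cv::KeyPoint> filtered;
    for (size_t i = 0; i < keypoints.size(); ++i)
        if (counts[i] > matchThreshold)
            filtered.push_back(keypoints[i]);
    return filtered;
}
```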
I then wrote an application that updates/changes the database of descriptors, and by mistake I put all the descriptors into a single Mat (instead of a vector of Mat). The matching results and the counters changed: for example, where I previously had 100 match counts, I now get 2 or 1.
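The accidental variant is roughly this (same placeholder names as above):

```cpp
// All groups concatenated into a single Mat, matched with one knnMatch call.
cv::Mat allDescriptors;
for (const cv::Mat& group : descriptorGroups)
    allDescriptors.push_back(group);          // append the rows of each group

cv::BFMatcher matcher(cv::NORM_L2);
std::vector<std::vector<cv::DMatch>> knnMatches;
matcher.knnMatch(queryDescriptors, allDescriptors, knnMatches, 2);
// Here knnMatch returns at most k = 2 neighbours per query descriptor over
// the whole concatenated set, rather than k neighbours per group.
```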
So I am a little bit confused: what exactly does knnMatch do, such that running it 10 times, once on each of the 10 groups of descriptors, gives a different number of matches than running it once on a single Mat containing all the descriptors of the 10 groups?
Should I train the matcher on those 10 groups? If so, how can I detect the object based on the trained descriptors? (Admittedly this approach is not very fast, but I need a detector that is invariant to rotation, affine transformation, and scaling.)
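For reference, this is roughly what I imagine the trained-matcher route to look like, based on DescriptorMatcher::add/train (again with the placeholder names from the sketches above):

```cpp
// Add the descriptor groups to the matcher, train it once, then match the
// query descriptors against all added groups at the same time.
cv::Ptr<cv::DescriptorMatcher> matcher =
    cv::DescriptorMatcher::create("FlannBased");   // descriptors must be CV_32F
matcher->add(descriptorGroups);   // the 10 groups, as std::vector<cv::Mat>
matcher->train();                 // builds the index over the added groups

std::vector<std::vector<cv::DMatch>> knnMatches;
matcher->knnMatch(queryDescriptors, knnMatches, 2);
// DMatch::imgIdx says which of the 10 groups each neighbour came from.
```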