Ask Your Question

Why is the matching giving different results, or what am I doing wrong?

asked 2015-04-13 08:12:14 -0600

thdrksdfthmn

I have an application that uses a database of descriptors extracted from several images (all containing the same type of object). The descriptors are stored in a vector of Mat (one Mat of descriptors per image). For an input image (different from the training images, and which may not contain the searched object), the application runs knnMatch between the input image's feature descriptors and each Mat in the vector, and counts the matches of each feature. After all the matching and counting, it filters the keypoints based on those counters: if a keypoint's number of matches is greater than a threshold, the keypoint is kept.
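To make the pipeline concrete, here is a minimal self-contained sketch of the per-group match-and-count scheme described above, using toy 2-D points in place of real OpenCV descriptors; the `max_dist` cutoff and the `threshold` value are hypothetical stand-ins for whatever filtering the real application applies:

```python
import math

def knn_match(query, train, k, max_dist=1.0):
    """Brute-force kNN: the k closest train descriptors within max_dist.

    max_dist is a hypothetical cutoff standing in for whatever distance or
    ratio filtering the real application applies to its DMatch results.
    """
    cand = sorted(
        ((i, math.dist(query, train[i])) for i in range(len(train))),
        key=lambda t: t[1],
    )
    return [(i, d) for i, d in cand[:k] if d <= max_dist]

# Toy "database": 3 groups of 2-D descriptors (real ones would be rows of a Mat).
groups = [[(0.0, 0.0)], [(0.1, 0.0)], [(5.0, 5.0)]]
queries = [(0.05, 0.0), (9.0, 9.0)]   # descriptors of the input image
k, threshold = 1, 2

kept = []
for q in queries:
    # Match q against each group separately and count the accepted matches.
    count = sum(len(knn_match(q, g, k)) for g in groups)
    if count >= threshold:            # keep keypoints matched often enough
        kept.append(q)
# kept now holds only the first query descriptor (matched in 2 of 3 groups)
```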

I then wrote an application that updates/changes the database of descriptors, and by mistake I put all the descriptors into a single Mat (not a vector of Mat). The matching results and the counters changed, e.g. where I previously counted 100 matches, I now get 2 or 1.

So I am a little confused: what exactly does knnMatch do, such that if I run it 10 times, once on each of 10 groups of descriptors, I get a different number of matches than if I run it once on a Mat containing all the descriptors of the 10 groups?

Should I train on those 10 groups? If I do so, how can I detect the object based on those trained descriptors? (True, this approach is not very fast, but I need a detector that is invariant to rotation/affine transformations and scaling.)


1 answer


answered 2015-04-13 10:16:08 -0600

Eduardo

As far as I understand your question, you are comparing two setups:

  • for each query keypoint, you match it against the k closest descriptors in each of the 10 groups (so one query keypoint yields k * 10 matches)
  • for each query keypoint, you match it against the k closest descriptors in the merged Mat (so one query keypoint yields only k matches)
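A tiny sketch (toy 2-D points in place of real descriptors) of why the two setups cannot give the same counts: per-group matching returns k matches from each group, so a single query descriptor can be "matched" many times, while in the merged Mat all groups compete and the same query yields only k matches in total:

```python
import math

def knn_match(query, train, k):
    """Brute-force kNN: (index, distance) of the k closest train rows."""
    cand = sorted(
        ((i, math.dist(query, train[i])) for i in range(len(train))),
        key=lambda t: t[1],
    )
    return cand[:k]

group_a = [(0.0, 0.0), (1.0, 0.0)]   # toy stand-ins for two descriptor Mats
group_b = [(0.2, 0.1), (5.0, 5.0)]
query = (0.1, 0.0)
k = 1

# Per-group: k matches from EACH group -> k * num_groups candidates per query.
per_group = [knn_match(query, g, k) for g in (group_a, group_b)]

# Merged: the groups now compete, so only k matches total for the same query.
merged = knn_match(query, group_a + group_b, k)
```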

Should I train on those 10 groups? If I do so, how can I detect the object based on those trained descriptors? (True, this approach is not very fast, but I need a detector that is invariant to rotation/affine transformations and scaling.)

The best would be to try both approaches. For the second option (the merged Mat), you can simply count which class is matched most often (you can assign a class id to each train keypoint manually, by keeping the corresponding class id for each row index in the Mat, or directly with OpenCV).
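A minimal sketch of that class-id voting idea, again with toy points; the `class_ids` list is a hypothetical manual mapping saying which group/object each merged row came from: match against the merged set, vote per class, and report the class that collects the most matches.

```python
import math
from collections import Counter

def knn_match(query, train, k):
    """Brute-force kNN: indices of the k closest train rows."""
    return sorted(range(len(train)), key=lambda i: math.dist(query, train[i]))[:k]

# Merged toy descriptors; class_ids[i] says which group/object row i came from.
train_desc = [(0.0, 0.0), (1.0, 0.0), (0.2, 0.1), (5.0, 5.0)]
class_ids  = [0, 0, 1, 1]

queries = [(0.1, 0.0), (0.9, 0.1), (4.8, 5.1)]
votes = Counter()
for q in queries:
    for idx in knn_match(q, train_desc, k=1):
        votes[class_ids[idx]] += 1   # one vote for the matched row's class

best_class, n_votes = votes.most_common(1)[0]
# best_class is the most-matched object class across all query descriptors
```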


Comments

Nice answer, thanks. So with k = 1, knnMatch returns only the single best match... and the more descriptors there are in the groups, the higher the probability that a descriptor not belonging to the object matches one on the object... Maybe I will study the class id stuff...

thdrksdfthmn ( 2015-04-14 04:28:27 -0600 )


Stats

Asked: 2015-04-13 08:12:14 -0600

Seen: 135 times

Last updated: Apr 13 '15