
OpenCV DescriptorMatcher matches

asked 2012-07-05 07:03:40 -0600 by ropsnou

updated 2012-07-10 04:01:45 -0600 by Rui Marques

I'm trying to match a query image against a set of images following the sample 'matching_to_many_images' (from here: https://code.ros.org/trac/opencv/browser/branches/2.3/opencv/samples/cpp/matching_to_many_images.cpp). The sample works fine; my question is about the matches that are returned. If I understand correctly, matches always correspond to query descriptors (i.e. the number of matches always equals the number of query descriptors). What if I want to do it the other way around and return matches corresponding to training descriptors (i.e. the number of matches equals the number of training descriptors)? Is that possible?

Imagine the following problem: I'm matching image Q against a set of 1000 images. Between image Q and image 40 there are 100 matches. Now I make a copy of image 40, so that images 40 and 41 are identical and the dataset holds 1001 images. If I repeat the matching, the result will be 50 matches between Q and 40 and 50 between Q and 41, not the 100 matches for each of 40 and 41 that I would like to achieve. Does anyone have an idea how to do this?

One "quick-and-dirty" solution is to treat the dataset as the query and vice versa, but then you can't make use of indexes etc., so it isn't a satisfactory solution.


1 answer


answered 2012-07-10 03:45:23 -0600 by Maria Dimashova

updated 2012-07-10 03:49:04 -0600

Hi! Your question is not entirely clear, but I'll try to answer it.

There are two kinds of match() methods in the DescriptorMatcher interface: matching query image descriptors against (1) the descriptors of one train image, and (2) the descriptors of a set of train images (see the documentation). The matching_to_many_images sample demonstrates (2). A real-world use of (2) is textured object detection, where the test image's descriptors are matched against the descriptors of all train images jointly, and the train images (objects) with a large number of matches are then identified. By the way, FlannBasedMatcher is used for better performance in that case.

In both cases (1) and (2) the number of matches you get equals the number of query image descriptors, because DescriptorMatcher::match() finds, for each query descriptor, the single nearest train descriptor (among all train images in case (2)). So your result with 1001 images is not a bug. If you want each of the duplicate images to receive the same number of matches, you could use interface (1) with each train image independently, but that is inefficient compared with a FlannBasedMatcher trained on the whole set of training descriptors.
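To make the counting behavior concrete, here is a minimal sketch of what match() does conceptually. It does not use OpenCV; the `Match` struct and `matchQueryToTrain` function are simplified stand-ins (assumptions, not the library's API) for cv::DMatch and brute-force L2 matching, kept self-contained so the point about match counts is easy to verify:

```cpp
#include <cassert>
#include <cfloat>
#include <cstddef>
#include <vector>

// Simplified stand-in for cv::DMatch: links one query descriptor
// to its single nearest train descriptor.
struct Match {
    std::size_t queryIdx;
    std::size_t trainIdx;
    float distance;
};

// Brute-force L2 matching: for EACH query descriptor, find the ONE
// nearest train descriptor. The result therefore always has exactly
// query.size() entries, no matter how many train descriptors exist.
std::vector<Match> matchQueryToTrain(
        const std::vector<std::vector<float>>& query,
        const std::vector<std::vector<float>>& train) {
    std::vector<Match> matches;
    for (std::size_t q = 0; q < query.size(); ++q) {
        std::size_t best = 0;
        float bestDist = FLT_MAX;
        for (std::size_t t = 0; t < train.size(); ++t) {
            float d = 0.f;
            for (std::size_t k = 0; k < query[q].size(); ++k) {
                float diff = query[q][k] - train[t][k];
                d += diff * diff;  // squared L2 distance
            }
            if (d < bestDist) { bestDist = d; best = t; }
        }
        matches.push_back({q, best, bestDist});
    }
    return matches;
}
```

This also shows why duplicating a train image splits its matches: each query descriptor picks exactly one nearest neighbor, so a duplicated train descriptor simply competes with its copy instead of doubling the count.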

To match the training set against the query descriptors, i.e. the other way around, you can use your "quick-and-dirty" solution. I don't see a problem with indices if you mean the indices of the DMatch structure. But if you mean the FLANN index, I agree that the "quick-and-dirty" solution is inefficient when the training set is very large and the number of query descriptors is small.

Then again, are you sure you want to match the whole training set against the query descriptors? For example, if you are trying to implement a "cross-check" filter of matches, you only need to match the small subset of train descriptors that appear in the matches of the direct pass (query to train). Or, better yet, consider another match filter such as the "ratio check", which doesn't require matching the training set against the query descriptors at all.
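The ratio check mentioned above can be sketched in a few lines. This is not OpenCV code; the `Match` struct is a simplified stand-in for cv::DMatch, and the input is assumed to be the two nearest train descriptors per query descriptor (what knnMatch with k=2 would give you):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Simplified stand-in for cv::DMatch.
struct Match {
    int queryIdx;
    int trainIdx;
    float distance;  // smaller is better
};

// Ratio check: for each query descriptor we have its two nearest
// train descriptors (best, secondBest). Keep the best match only if
// it is clearly better than the runner-up; ambiguous matches, where
// the two candidates are almost equally close, are dropped.
std::vector<Match> ratioFilter(
        const std::vector<std::pair<Match, Match>>& knn2,
        float ratio = 0.8f) {
    std::vector<Match> kept;
    for (const std::pair<Match, Match>& p : knn2) {
        if (p.first.distance < ratio * p.second.distance)
            kept.push_back(p.first);
    }
    return kept;
}
```

Note that this filter works entirely on the query-to-train matches, which is why no second matching pass over the training set is needed. The 0.8 threshold is a commonly used default, not a fixed rule.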

