# Does the matching work in OpenCV 2.4.4?

The reason I ask this question is the following: when I match two sets of descriptors, of size 64x241 and 64x258, with matcher.match(descriptorsa,descriptorsb,matches);, the result contains 1x258 matches. How is this possible when the first image only contains 241 descriptors?

EDIT:

These are the matches after calling the matcher.match(descriptorsa,descriptorsb,matches); function. Matches on the original picture

The code is the following:

    // matcher
    MatOfDMatch matches = new MatOfDMatch();
    matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    matcher.match(descriptorsDB, descriptorsCamera, matches);
    Log.d("LOG!", "Number of matches after .match() = " + matches.size());
    MatOfDMatch goedematches = new MatOfDMatch();

    double max_dist = 0;
    double min_dist = 100000;

    if (descriptorsDB.cols() == descriptorsCamera.cols())
    {
        // convert once instead of calling toArray() on every iteration
        DMatch[] matchArray = matches.toArray();

        // calculates max and min distance between keypoints
        for (int i = 0; i < descriptorsDB.rows(); i++)
        {
            double dist = matchArray[i].distance;
            if (dist < min_dist) min_dist = dist;
            if (dist > max_dist) max_dist = dist;
        }

        // keeps only the good matches
        for (int i = 0; i < descriptorsDB.rows(); i++)
        {
            if (matchArray[i].distance <= 2 * min_dist)
            {
                MatOfDMatch temp = new MatOfDMatch();
                temp.fromArray(matchArray[i]);
                goedematches.push_back(temp);
            }
        }
    }



It matches each feature of the first image to its best correspondence in the second image. This means that if your first image has 241 descriptors, you will get 241 matches regardless of the second image.

matcher.match(descriptorsa,descriptorsb,matches);


should not return the same number of matches as:

matcher.match(descriptorsb,descriptorsa,matches);


If the first image has 200 descriptors and the second only one, then you will get 200 matches: the best solution for each will be the unique descriptor of the second image!
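To make this concrete, here is a minimal toy brute-force Hamming matcher in plain Java (a hypothetical sketch, not OpenCV's actual implementation; all class and method names are made up). It shows that the match count always equals the number of query descriptors, and that swapping the arguments changes the result:

```java
// Toy brute-force matcher: for every query descriptor, find the train
// descriptor with the smallest Hamming distance. The number of matches
// always equals the number of query descriptors, whatever the train size.
public class BruteForceDemo {

    // Hamming distance between two byte descriptors of equal length.
    static int hamming(byte[] a, byte[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Integer.bitCount((a[i] ^ b[i]) & 0xFF);
        }
        return d;
    }

    // Returns, for each query row, the index of its best train row.
    static int[] match(byte[][] query, byte[][] train) {
        int[] best = new int[query.length];
        for (int q = 0; q < query.length; q++) {
            int bestDist = Integer.MAX_VALUE;
            for (int t = 0; t < train.length; t++) {
                int d = hamming(query[q], train[t]);
                if (d < bestDist) { bestDist = d; best[q] = t; }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        byte[][] a = { {0b0001}, {0b0011}, {0b0111} }; // 3 "descriptors"
        byte[][] b = { {0b0001}, {0b1111} };           // 2 "descriptors"

        System.out.println(match(a, b).length); // 3: one match per query row
        System.out.println(match(b, a).length); // 2: matching is not symmetric
    }
}
```

The asymmetry is exactly what the answer describes: the outer loop runs over the query set, so only the query set's size determines how many matches come back.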


So should I always describe the second image instead of the first to get a better result?

( 2013-05-08 06:40:04 -0500 )

No. As I said in at least three answers to your posts: you will always get "good matches" (inliers) and "bad matches" (outliers). First of all, you should draw the matches to see whether you have good matches or not (but apparently you haven't done this, or haven't shared the result with us). I don't know how you can say that this is not working if you haven't even drawn the matches in a comprehensible way.

( 2013-05-08 07:24:46 -0500 )

I actually drew the matches already; I will share the results with you (check the edit on this post). I'm sorry for not sharing them earlier. I hope these images suffice, since I wasn't able to draw lines between the two images. These are the matches I get from the matcher.match(descriptorsa,descriptorsb,matches); function (not filtered). I hope those are the ones you want to see.

( 2013-05-08 12:24:25 -0500 )

If you want to draw lines between your images, you will need to copy both of them into the same matrix first. I don't see why your results would be incorrect. The matches you get from the original image to the similar one are probably really coherent, while those with the other images aren't (but you would need to draw lines to see this). As I said, the algorithm gives the _best possible match w.r.t. the Hamming distance_, even when that match is really bad. Your filtering step is also very naive; you should prefer a geometric validation using e.g. a homography matrix. But the first step would be to draw lines to convince yourself that the algorithms work.

( 2013-05-14 01:58:16 -0500 )
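To sketch what such a geometric validation looks like, here is a toy RANSAC-style filter in plain Java. It is a deliberately simplified stand-in for homography estimation with RANSAC (as done by OpenCV's Calib3d.findHomography): the motion model here is a pure 2-D translation, and all class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Toy geometric validation: instead of keeping matches by descriptor
// distance alone, keep only the matches consistent with one global motion
// model. The model here is a translation proposed by each match in turn.
public class GeometryFilterDemo {

    // One putative match: point (x1,y1) in image A matched to (x2,y2) in B.
    static class Match {
        final double x1, y1, x2, y2;
        Match(double x1, double y1, double x2, double y2) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
        }
    }

    // Keep the largest set of matches that agree on a single translation.
    static List<Match> filter(List<Match> matches, double tol) {
        List<Match> best = new ArrayList<>();
        for (Match m : matches) {                 // each match proposes a model
            double dx = m.x2 - m.x1, dy = m.y2 - m.y1;
            List<Match> inliers = new ArrayList<>();
            for (Match n : matches) {             // count who agrees with it
                if (Math.abs((n.x2 - n.x1) - dx) < tol
                        && Math.abs((n.y2 - n.y1) - dy) < tol) {
                    inliers.add(n);
                }
            }
            if (inliers.size() > best.size()) best = inliers;
        }
        return best;
    }

    public static void main(String[] args) {
        List<Match> ms = new ArrayList<>();
        ms.add(new Match(0, 0, 10, 5));
        ms.add(new Match(1, 1, 11, 6));
        ms.add(new Match(2, 0, 12, 5));
        ms.add(new Match(5, 5, 90, 90)); // outlier: inconsistent displacement
        System.out.println(filter(ms, 0.5).size()); // 3 consistent survive
    }
}
```

A distance threshold like `2*min_dist` only says "this descriptor looks similar"; the geometric check above additionally demands that the surviving matches describe one coherent motion between the two images, which is what discards the outliers.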

I do not completely agree with @NightLife's comment. Of course more matches can happen for a single point; that is why people use algorithms like RANSAC to counter that.

However, what happens here is that the matching functionality is a brute-force matcher, which means that it matches both ways between the data sets. Therefore the result must contain as many elements as the largest descriptor set; otherwise the matcher would try to access elements in a descriptor set that are out of the actual range. This is the reason for your 258 value, which is completely correct and how the algorithm should work in order not to create errors.


When I use the matcher I get the following results:

- TLK = 258 matches (when I match the same image)
- TLK2 = 61 matches (when I match a similar image)
- TLK3 = 44 matches (has nothing to do with the original image)
- TLK4 = 251 matches (has NOTHING to do with the original image, gives back a lot of matches)
- TLK5 = 258 matches (same here, has NOTHING to do with the original, also gives back a full matching)

What is going on? :S I have the feeling the matcher has no clue what it is doing.

( 2013-05-08 06:11:12 -0500 )

Except it does not do two-way matching. It matches the first set of descriptors to the second. In other words, matching A to B won't give the exact same results as matching B to A.

( 2013-05-08 06:37:37 -0500 )

I would not post "NOTHING" again ... seriously ... be patient and let us try to find the solution together.

( 2013-05-08 06:43:57 -0500 )

@Nyenna: I was expecting it to perform two-way matching, but it seems I was wrong. However, your suggestion seems legit. Thanks for the update, I learnt something new today =)

( 2013-05-08 06:46:47 -0500 )

If you show the matches in one window, you can see that sometimes one feature point from the first or second image is matched to more than one point in the other image. So you can have more matches than 241.
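A tiny illustration of this many-to-one behaviour in plain Java (a hypothetical sketch, not OpenCV code; all names are made up). Nearest-neighbour matching puts no uniqueness constraint on the train side, so two different query descriptors can both pick the same train descriptor:

```java
// Nearest-neighbour matching is many-to-one: several query descriptors
// can all be closest to the same train descriptor, so one point in the
// other image may carry several match lines when you draw them.
public class ManyToOneDemo {

    // Index of the train descriptor with the smallest Hamming distance.
    static int nearest(int query, int[] train) {
        int best = 0;
        for (int t = 1; t < train.length; t++) {
            if (Integer.bitCount(query ^ train[t])
                    < Integer.bitCount(query ^ train[best])) best = t;
        }
        return best;
    }

    public static void main(String[] args) {
        int[] train = { 0b0000, 0b1111 };
        // both queries are closer to train[0] than to train[1]
        System.out.println(nearest(0b0001, train)); // 0
        System.out.println(nearest(0b0010, train)); // 0
    }
}
```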

