# Which matcher is best for SURF?

I am developing an Android image recognition application, and I used the SURF algorithm as detector and descriptor. There are many matchers, like flannbased, bruteforce_hamming, bruteforce_hamminglut, bruteforce_sl2 and bruteforce_l1. Can I know which matcher is best for SURF?



In my opinion it is the FLANN-based matcher; however, to get even better results you can filter the matches and keep only the best ones. For example:

```cpp
// 'descriptors_1' and 'matches' come from the detector/matcher calls above
double max_dist = 0;
double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );

//-- Keep only "good" matches (i.e. whose distance is less than 2*min_dist)
//-- PS: radiusMatch can also be used here.
std::vector< cv::DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{
    if( matches[i].distance < 2*min_dist )
    {
        good_matches.push_back( matches[i] );
    }
}
for( size_t i = 0; i < good_matches.size(); i++ )
{
    printf( "-- Good Match [%d] Keypoint 1: %d  -- Keypoint 2: %d  \n",
            (int)i, good_matches[i].queryIdx, good_matches[i].trainIdx );
}
```


imran, could you please explain what max and min distance represent? How does this filtering work?

( 2012-11-08 09:31:19 -0500 )

With FLANN-based matching my code breaks down. Could it be because the images are not the same size?

( 2012-11-11 05:48:13 -0500 )

@dilgenter max and min distance are, respectively, the greatest and the least distance from a keypoint in one image to the keypoint it matched in the other image. So in the code they refer to the matched keypoints only. The filtering works by selecting only those matches whose distance is less than 2 × the least distance.

( 2012-11-13 15:50:11 -0500 )

@Szippy image sizes should not be a problem. There must be another reason why it breaks down. Give me your email address and I'll send you some sample C++ code that works.

( 2012-11-13 15:52:26 -0500 )

OK. My email address is szigeti.peter5@gmail.com. In the meantime I fixed the code, but I am having trouble porting it to Android. I am using the NDK to achieve this.

( 2012-11-14 10:30:58 -0500 )

Hi,

We experimented with various matchers for SURF.

FLANN is fast but gives low performance in a difficult context (a heterogeneous/varied dataset).

Brute-force matchers based on the L1 or L2 distance give good results.

If you consider L2-based brute-force matchers, prefer the L2 distance without the square-root computation: it does not introduce error in this matching case and needs less processing.

Typical use:

```cpp
// Allocate your descriptor extractor and your matcher with an OpenCV smart
// pointer (cv::Ptr takes care of the object delete step):

//-> 1. descriptor:
cv::Ptr<cv::DescriptorExtractor> _descExtractor = cv::DescriptorExtractor::create("SURF");

/*-> 2. matcher: pass a string keyword that selects which matcher to use:
 *   "BruteForce"          (it uses the real L2)
 *   "BruteForce-SL2"      (not in the documentation, BUT this is the one
 *                          that skips the square root!)
 *   "BruteForce-L1"
 *   "BruteForce-Hamming"
 *   "BruteForce-Hamming(2)"
 *   "FlannBased"
 */
cv::Ptr<cv::DescriptorMatcher> _descMatcher = cv::DescriptorMatcher::create( keyword );
```


Finally, regarding good-match sorting, you should take a look at the RANSAC method, which identifies a global displacement and rejects the matches that do not correspond to it. Have a nice coding ;o)


Hello, I am also working on an Android project for image recognition: matching a set of images from the SD card against what the camera sees. Have you completed your work? Could you help me with a few questions?

My thought process is like this: 1. load all images from the SD card, calculate their keypoints, then their descriptors in a Mat, and store them in a List<Mat> container; 2. when the camera is initialized I get one image, turn it to grayscale, get its keypoints and descriptors, and do a brute-force matching of the descriptor just calculated against all the previously loaded descriptors in a loop.

Then there is the MatOfDMatch result, and there I have some issues. I am trying to get the good_matches list, but all the distances are 0 in the match result. I am taking the largest best_matches container as the one identifying my image among the loaded ones, but it is not working. Any ideas?


@Szippy, I suggest you create a new question

( 2012-11-13 15:53:54 -0500 )

@Szippy, did you solve your problem?

( 2013-04-16 04:41:18 -0500 )

Yes, I have.

( 2013-08-10 02:49:53 -0500 )
