
Cannot Reproduce Results of Feature Matching with FLANN Tutorial

asked 2018-03-28 08:05:13 -0600

DaleWD

I installed OpenCV 3.3.1 on OS X 10.11.6 using MacPorts. I copied the code from the FLANN feature-matching tutorial at https://docs.opencv.org/3.1.0/d5/d6f/.... I copied the images from https://github.com/opencv/opencv/blob... (and box). I built and ran the executable, but got poor feature matching that did not look like the image shown on the tutorial page.

Is the tutorial code possibly out of sync with the sample result images, or is there a problem with OpenCV 3.3.1 in MacPorts?

The reason I'm asking is that I was writing my own OpenCV code to perform feature matching, but could not get good performance even when tuning threshold parameters, trying different matching algorithms, etc. -- so I backed up from my code to the tutorial, and discovered I'm still seeing the same poor results. I get unreliable feature matches that look a lot more like random correspondences.

Can someone verify whether the tutorial is working as advertised? I'm trying to isolate what my problem is.


Comments

latest master, win -- 5 6 good keypoints only, so not working as advertised.

berak ( 2018-03-28 08:58:12 -0600 )

Okay, that's a lot like what I'm seeing. And my "good" matches aren't all really corresponding features, either.

DaleWD ( 2018-03-28 09:31:13 -0600 )

six?

    -- Max dist : 0.732797
    -- Min dist : 0.055168
    -- Good Match [0] Keypoint 1: 38  -- Keypoint 2: 116
    -- Good Match [1] Keypoint 1: 47  -- Keypoint 2: 77
    -- Good Match [2] Keypoint 1: 49  -- Keypoint 2: 77
    -- Good Match [3] Keypoint 1: 70  -- Keypoint 2: 310
    -- Good Match [4] Keypoint 1: 104  -- Keypoint 2: 356
    -- Good Match [5] Keypoint 1: 111  -- Keypoint 2: 335
LBerger ( 2018-03-28 09:31:50 -0600 )

@LBerger, indeed, 6. I simply can't count on Wednesdays.

berak ( 2018-03-28 09:34:29 -0600 )

Is the loop wrong?

    for( int i = 0; i < descriptors_1.rows; i++ )
    {
      double dist = matches[i].distance;
      if( dist < min_dist ) min_dist = dist;
      if( dist > max_dist ) max_dist = dist;
    }

It should be for( int i = 0; i < matches.size(); i++ ), shouldn't it? Maybe I'm tired.

LBerger ( 2018-03-28 09:38:11 -0600 )

Yes, probably, though it's the same number (since img1 is the small one, there are fewer keypoints in img1, and the number of matches is min(kp1, kp2)).

It might not even find a match for each original keypoint.

berak ( 2018-03-28 10:00:14 -0600 )
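
For reference, the corrected distance loop would iterate over the matches actually returned rather than over descriptors_1.rows (a minimal sketch; the variable names follow the tutorial code under discussion):

    // Find the min/max descriptor distance over the matches that were
    // actually returned, not over the number of query descriptors.
    double max_dist = 0, min_dist = 100;
    for( size_t i = 0; i < matches.size(); i++ )
    {
      double dist = matches[i].distance;
      if( dist < min_dist ) min_dist = dist;
      if( dist > max_dist ) max_dist = dist;
    }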

Maybe, but the result doesn't change when FlannBasedMatcher is replaced with BFMatcher matcher(detector->defaultNorm(), true);

LBerger ( 2018-03-28 10:17:30 -0600 )
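
For context, the cross-check variant mentioned above would be used like this (a sketch; cross-checking keeps only mutual nearest neighbors and replaces the ratio test, so it is used with match() rather than knnMatch()):

    // Brute-force matcher with cross-checking: a match is kept only if the
    // two descriptors are each other's nearest neighbor.
    BFMatcher matcher(detector->defaultNorm(), true);
    std::vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);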

@DaleWD maybe you should post an issue and give a link to this post.

LBerger ( 2018-03-28 10:42:29 -0600 )

Funny, the Python version gives better results (more matches).

Is it about the default values (params) of the matcher?

berak ( 2018-03-28 11:30:59 -0600 )

It's really weird. I tried SIFT and SURF, I tried BFMatcher with k-nearest neighbors + the Lowe ratio test, tried different SIFT contrast thresholds, tried changing the layers per octave, tried different matchers -- I can't seem to make any combination of them work well.

I would post an issue, but I'm at work right now. Thanks for verifying it's not entirely me...

DaleWD ( 2018-03-28 11:53:45 -0600 )
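
For comparison, a minimal sketch of the SIFT + BFMatcher + ratio-test combination described above (assuming OpenCV 3.x with the opencv_contrib xfeatures2d module; the 0.75 threshold is Lowe's usual suggestion, and the helper name matchWithSIFT is just for illustration):

    #include <opencv2/features2d.hpp>
    #include <opencv2/xfeatures2d.hpp>

    using namespace cv;

    std::vector<DMatch> matchWithSIFT(const Mat& img_1, const Mat& img_2)
    {
        // Detect SIFT keypoints and compute descriptors
        Ptr<xfeatures2d::SIFT> sift = xfeatures2d::SIFT::create();
        std::vector<KeyPoint> kp1, kp2;
        Mat desc1, desc2;
        sift->detectAndCompute(img_1, noArray(), kp1, desc1);
        sift->detectAndCompute(img_2, noArray(), kp2, desc2);

        // Brute-force matching with L2 norm (SIFT/SURF are float descriptors),
        // keeping the two nearest neighbors for the ratio test
        BFMatcher matcher(NORM_L2);
        std::vector< std::vector<DMatch> > knn;
        matcher.knnMatch(desc1, desc2, knn, 2);

        // Lowe ratio test: keep a match only if the best neighbor is
        // clearly closer than the second best
        std::vector<DMatch> good;
        for (size_t i = 0; i < knn.size(); i++)
            if (knn[i].size() == 2 && knn[i][0].distance < 0.75f * knn[i][1].distance)
                good.push_back(knn[i][0]);
        return good;
    }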

1 answer

answered 2018-03-28 12:15:50 -0600

Eduardo

I don't know what changed in SURF between the version used when the tutorial was written and now, but I would not rely on the result image, since it dates back to 2011.

You should post your query and train images.

Keypoint matching needs texture information and will perform very badly on a uniform scene. SURF features are not invariant to viewpoint changes. Also, the Lowe ratio test should be used for matching. Despite the theoretical rotation/scale invariance, in my experience you will in practice observe degraded feature matching.

With the following code:

  // At file scope (SURF lives in the opencv_contrib xfeatures2d module in OpenCV 3.x):
  // #include <opencv2/core.hpp>
  // #include <opencv2/features2d.hpp>
  // #include <opencv2/xfeatures2d.hpp>
  // using namespace cv;
  // using namespace cv::xfeatures2d;

  //-- Step 1: Detect the keypoints using the SURF detector, compute the descriptors
  int minHessian = 400;
  Ptr<SURF> detector = SURF::create();
  detector->setExtended(true);               // 128-element descriptors instead of 64
  detector->setHessianThreshold(minHessian);
  std::vector<KeyPoint> keypoints_1, keypoints_2;
  Mat descriptors_1, descriptors_2;
  detector->detectAndCompute( img_1, Mat(), keypoints_1, descriptors_1 );
  detector->detectAndCompute( img_2, Mat(), keypoints_2, descriptors_2 );

  //-- Step 2: Match descriptor vectors using the FLANN matcher, keeping the
  //-- two nearest neighbors of each query descriptor for the Lowe ratio test
  FlannBasedMatcher matcher;
  std::vector< std::vector<DMatch> > knn_matches;
  matcher.knnMatch( descriptors_1, descriptors_2, knn_matches, 2 );
  std::vector<DMatch> good_matches;
  for (size_t i = 0; i < knn_matches.size(); i++)
  {
    if (knn_matches[i].size() > 1)
    {
      // Keep a match only if the best neighbor is clearly closer than the second best
      float ratio_dist = knn_matches[i][0].distance / knn_matches[i][1].distance;
      if (ratio_dist < 0.75f)
      {
        good_matches.push_back(knn_matches[i][0]);
      }
    }
  }

  //-- Draw only the "good" matches
  Mat img_matches;
  drawMatches( img_1, keypoints_1, img_2, keypoints_2,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

Result image (setExtended(false), 64-bit descriptor):

Result image (setExtended(true), 128-bit descriptor):
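
If the ratio test still leaves outliers, a common follow-up (not part of the tutorial) is to verify the good matches geometrically with a RANSAC-fitted homography; a sketch, assuming at least 4 good matches and the variable names from the code above:

    // Requires #include <opencv2/calib3d.hpp> at file scope for findHomography.
    // Geometric verification: keep only matches consistent with one homography.
    std::vector<Point2f> pts1, pts2;
    for (size_t i = 0; i < good_matches.size(); i++)
    {
      pts1.push_back( keypoints_1[good_matches[i].queryIdx].pt );
      pts2.push_back( keypoints_2[good_matches[i].trainIdx].pt );
    }
    std::vector<uchar> inlier_mask;
    Mat H = findHomography( pts1, pts2, RANSAC, 3.0, inlier_mask );
    std::vector<DMatch> inlier_matches;
    for (size_t i = 0; i < good_matches.size(); i++)
      if ( inlier_mask[i] )
        inlier_matches.push_back( good_matches[i] );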


Comments

Yes, this looks a lot like the code I originally wrote using SIFT with knn. I will have to try this again tonight when I get home to see what is different in yours compared to mine. Mine did not get more than one or two real matches.

DaleWD ( 2018-03-28 12:35:30 -0600 )
