
ORB keypoints distribution over an image

asked 2016-04-20 09:12:23 -0600 by strann, updated 2016-04-20 09:17:56 -0600

Hello,

I'm working on stitching aerial images taken with a UAV. My approach works fine for some nadir datasets but fails for others. I believe one of the reasons is that for some images most of the keypoints found by ORB are concentrated in a few parts of the image rather than spread over the entire image. How can I achieve a more uniform distribution of keypoints using ORB?

Currently I use the following parameters:

Ptr<ORB> orb = ORB::create(3000, 1.0, 1, 31, 0, 2, ORB::HARRIS_SCORE, 31, 20);

[Images: input image; obtained keypoints]


3 answers


answered 2016-04-20 10:16:16 -0600 by Mathieu Barnachon

In version 2.4 there was a GridAdaptedFeatureDetector that sampled points over a grid for you.

In version 3.1 it seems to be gone, no clue why, but you can reproduce the behaviour yourself: split your image into a grid and detect a reasonable number of points in each cell (see the sketch below). You then need to tune the grid size and the number of features per cell to suit your stitching.
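
A minimal sketch of the per-cell detection described above, assuming OpenCV 3.x; the grid dimensions and per-cell feature budget are hypothetical values that have to be tuned for your data, and this is not the original GridAdaptedFeatureDetector:

    #include <opencv2/features2d.hpp>
    #include <vector>

    std::vector<cv::KeyPoint> detectOnGrid( const cv::Mat& image,
                                            int gridRows, int gridCols,
                                            int featuresPerCell )
    {
      // One ORB instance, limited to the per-cell budget.
      cv::Ptr<cv::ORB> orb = cv::ORB::create( featuresPerCell );
      std::vector<cv::KeyPoint> allKeypoints;

      const int cellW = image.cols / gridCols;
      const int cellH = image.rows / gridRows;

      for( int r = 0; r < gridRows; ++r )
      {
        for( int c = 0; c < gridCols; ++c )
        {
          // Let the last row/column absorb the remainder so the grid covers the image.
          const int w = ( c == gridCols - 1 ) ? image.cols - c * cellW : cellW;
          const int h = ( r == gridRows - 1 ) ? image.rows - r * cellH : cellH;
          const cv::Rect cell( c * cellW, r * cellH, w, h );

          std::vector<cv::KeyPoint> cellKeypoints;
          orb->detect( image( cell ), cellKeypoints );

          // Keypoints are relative to the cell ROI; shift them back to full-image coordinates.
          for( cv::KeyPoint& kp : cellKeypoints )
          {
            kp.pt.x += (float)cell.x;
            kp.pt.y += (float)cell.y;
            allKeypoints.push_back( kp );
          }
        }
      }
      return allKeypoints;
    }

Descriptors should then be computed once on the full image (orb->compute) so that, as noted in the comments below, matching still treats the image as a whole.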


Comments

I believe I saw this in the stitching pipeline in OpenCV 3.1, but back then I didn't understand why the image was split into a grid. Thank you!

strann ( 2016-04-20 10:31:17 -0600 )

@Mathieu, does it make sense to take into account both features detected in the whole image and features detected in each cell of the grid?

strann ( 2016-04-20 10:38:54 -0600 )

I'm not sure what you mean: you don't perform whole-image detection directly. But keep in mind that the matching must consider the image as a whole, not just cell by cell.

Mathieu Barnachon ( 2016-04-20 15:28:37 -0600 )

Yes, I understand how to do the feature matching, thank you. I mean I could first perform detection directly on the whole image and then split the image into a grid and perform detection cell-wise, so I would have both global and local features?

strann ( 2016-04-21 02:00:58 -0600 )

answered 2016-04-21 07:30:29 -0600 by Der Luftmensch

What you want is an algorithm called adaptive non-maximal suppression. This will achieve an even distribution of those features which are locally "responsive". This is my implementation guided by the paper "Multi-Image Matching using Multi-Scale Oriented Patches" by Brown, Szeliski, and Winder.

    #include <opencv2/core.hpp>
    #include <algorithm>
    #include <limits>
    #include <vector>

    // Adaptive non-maximal suppression (Brown, Szeliski & Winder):
    // keep the numToKeep keypoints with the largest suppression radius,
    // i.e. those that are the strongest response within their neighbourhood.
    void adaptiveNonMaximalSuppression( std::vector<cv::KeyPoint>& keypoints,
                                        const int numToKeep )
    {
      if( (int)keypoints.size() <= numToKeep ) { return; }

      // Sort by response, strongest first.
      std::sort( keypoints.begin(), keypoints.end(),
                 []( const cv::KeyPoint& lhs, const cv::KeyPoint& rhs )
                 {
                   return lhs.response > rhs.response;
                 } );

      std::vector<cv::KeyPoint> anmsPts;

      std::vector<double> radii( keypoints.size() );
      std::vector<double> radiiSorted( keypoints.size() );

      const float robustCoeff = 1.11f; // see paper

      for( size_t i = 0; i < keypoints.size(); ++i )
      {
        // Suppression radius of point i: distance to the nearest point whose
        // response is significantly (robustCoeff times) stronger. Since the
        // points are sorted by response, only earlier points need checking.
        const float response = keypoints[i].response * robustCoeff;
        double radius = std::numeric_limits<double>::max();
        for( size_t j = 0; j < i && keypoints[j].response > response; ++j )
        {
          radius = std::min( radius, cv::norm( keypoints[i].pt - keypoints[j].pt ) );
        }
        radii[i]       = radius;
        radiiSorted[i] = radius;
      }

      // Sort the radii in descending order to find the cut-off.
      std::sort( radiiSorted.begin(), radiiSorted.end(),
                 []( double lhs, double rhs )
                 {
                   return lhs > rhs;
                 } );

      // Keep every point whose suppression radius is at least as large as
      // that of the numToKeep-th point (more may survive in case of ties).
      const double decisionRadius = radiiSorted[numToKeep - 1];
      for( size_t i = 0; i < radii.size(); ++i )
      {
        if( radii[i] >= decisionRadius )
        {
          anmsPts.push_back( keypoints[i] );
        }
      }

      anmsPts.swap( keypoints );
    }

Comments

Sorry, I tried to convert this code to Java, but I don't understand the sort function, the swap function, numeric_limits, and keypoints[i].pt - keypoints[j].pt (in Java I can't subtract points). Can you help me?

gi ( 2016-08-03 12:18:41 -0600 )

This shouldn't be too difficult to convert to Java.

The way std::sort is used above is to sort the keypoints so that they are ordered from most responsive to least responsive. Java has built-in sorting algorithms; use one of those, as writing your own is probably a waste of time given how common the operation is.

The subtraction of the keypoints, subsequently passed to cv::norm, simply gives the (positive) distance between the two points; compute this however you can.

std::swap was my way to reassign without copying. std::move would have worked as well. Since you have already chosen to use Java, efficiency isn't of primary concern, so just make sure that the vector/array passed in is the one that gets modified or return the new vector/array.

Der Luftmensch ( 2016-08-09 18:42:16 -0600 )

First of all, thank you for sharing. Three questions: does this implementation work like the original one available in their repository? Have you tested against it? After execution, I suppose keypoints is an ordered vector, right? Thank you in advance!

HYPEREGO ( 2019-04-05 10:46:30 -0600 )

The algorithm in the answer above should be similar to the paper also listed in the answer (from 2016). The SSC method you linked to is from 2018 ("Efficient adaptive non-maximal suppression algorithms for homogeneous spatial keypoint distribution"). So it looks to me like the repo you linked should give better results, faster and more spread out. This is the same paper linked to by Dail below. The algorithm above is called "Brown" in the SSC paper. Here is the link to the C++ SSC in that repo.

Der Luftmensch ( 2019-04-05 14:10:17 -0600 )

Thank you for your reply, it clarifies everything! Since bucketing was removed in OpenCV 3.x, I hope they will add such a keypoint selection in the future.

HYPEREGO ( 2019-04-08 03:32:11 -0600 )

I've already copied, modified, and run the SSC code; just go ahead and use it, no need to wait for OpenCV, or do a pull request and add it.

Der Luftmensch ( 2019-04-08 07:19:36 -0600 )

Yes, sure, I've already done that; it would just be nice to have such keypoint selection by default. Maybe in the next few months I'll do a pull request. By the way, again, thank you a lot!

HYPEREGO ( 2019-04-08 08:00:29 -0600 )

@gi A Java implementation of the SSC algorithm is coming; it is currently under PR review.

Dail ( 2020-07-24 10:48:34 -0600 )

answered 2018-05-10 02:05:17 -0600 by Dail, updated 2020-07-24 10:46:51 -0600

There is a recent paper that tackles the problem of homogeneous keypoint distribution over the image. C++, Python, Java, and MATLAB interfaces are provided in this repository.
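
For illustration only, a hedged sketch of how such a selector is typically called; the ssc function name and its parameter list below are assumptions about that repository's C++ interface, not verified against it:

    // Hypothetical call pattern for an SSC-style selector.
    cv::Ptr<cv::ORB> orb = cv::ORB::create( 10000 );
    std::vector<cv::KeyPoint> keypoints;
    orb->detect( img, keypoints );

    // ANMS-style selectors generally expect keypoints sorted by response, strongest first.
    std::sort( keypoints.begin(), keypoints.end(),
               []( const cv::KeyPoint& a, const cv::KeyPoint& b )
               { return a.response > b.response; } );

    // Assumed signature: keypoints, number to retain, tolerance, image width, image height.
    std::vector<cv::KeyPoint> selected = ssc( keypoints, 3000, 0.1f, img.cols, img.rows );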


Comments


I've tried this method and it works great and is super fast. Highly recommended.

Der Luftmensch ( 2020-03-20 10:19:18 -0600 )
