
Calculate (ORB) descriptors for arbitrary image points

I have a set of points in an image (vector&lt;Point2f&gt;) and I want to calculate their descriptors, using the ORB descriptor. It is possible to compute descriptors for any set of keypoints (vector&lt;KeyPoint&gt;), but keypoints carry additional information such as size and rotation (the BRIEF descriptor in ORB is very sensitive to rotation, so it's not appropriate to just use default keypoint parameters). The keypoint detector finds this additional information, but I'm curious whether there is a way to do what it does when the point coordinates are already known.

The ORB keypoint detector roughly does these steps:

  1. FAST keypoint detection (with image pyramids)
  2. Calculate the Harris corner response (aka corner measure, corner score, corner criterion, etc.)
  3. Calculate mass center and inertia in the keypoint's neighborhood to find the rotation

Then the compute function calculates the rotation-compensated BRIEF descriptor.

In short, I wish to circumvent step 1 in the ORB detector. The image pyramid might complicate things, but for my use case, losing scale invariance is a reasonable compromise.

Edit: I just found out that the "size" parameter of a keypoint is the size of the patch around the point that is used to compute the descriptor. For ORB this is 31 (i.e. 31x31 pixels). However, this number increases as the "octave" parameter increases. My understanding is that "octave" is the layer of the image pyramid in which the keypoint was detected. The patch, I assume, gets scaled and rotated before the descriptor is computed (and I also assume it is discretized so that there are 31x31 pixels when computing the descriptor).

Anyhow, since scale invariance is negligible in this use case, "size" and "octave" are always 31.0f and 0. The question then is whether there is an OpenCV function for calculating the corner response directly, and one for calculating the angle (I have some vague recollection of the math required to do those things, but I assume an OpenCV implementation will be more optimized).