I have a set of points in an image (vector<Point2f>
) and I want to calculate their descriptors. I want to use the ORB descriptor for this. It is possible to calculate the descriptors for any set of keypoints (vector<KeyPoint>
), but those carry additional information like size and rotation (the BRIEF descriptor inside ORB is very sensitive to rotation, so it isn't appropriate to just use default keypoint parameters). The keypoint detector finds this additional information, but I'm curious whether there is a way to do what it does when the point coordinates are already known.
The ORB keypoint detector roughly does these steps:
- FAST keypoint detection (with image pyramids)
- Calculate the Harris corner response (aka corner measure, corner score, corner criterion etc.)
- Calculate the intensity centroid (image moments) of the keypoint's neighborhood to find the orientation
Then the compute-function calculates the rotation-compensated BRIEF descriptor.
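As I understand it, the orientation step boils down to first-order image moments: the angle is `atan2(m01, m10)` over a circular patch around the keypoint. A minimal pure-C++ sketch of that idea (the function name `icAngle` and the flat patch layout are my own, not OpenCV's):

```cpp
#include <cmath>
#include <vector>

// Intensity-centroid orientation, as used by ORB.
// patch: square grayscale patch of side (2*r+1) centered on the keypoint,
// stored row-major and indexed patch[(y+r)*(2*r+1) + (x+r)] for x,y in [-r, r].
double icAngle(const std::vector<double>& patch, int r)
{
    double m10 = 0.0, m01 = 0.0;
    const int w = 2 * r + 1;
    for (int y = -r; y <= r; ++y) {
        for (int x = -r; x <= r; ++x) {
            if (x * x + y * y > r * r)
                continue;                       // restrict to a circular patch
            double I = patch[(y + r) * w + (x + r)];
            m10 += x * I;                       // first-order moment in x
            m01 += y * I;                       // first-order moment in y
        }
    }
    return std::atan2(m01, m10);                // orientation in radians
}
```

A patch whose mass sits to the right of the center yields an angle near 0; mass below the center yields roughly pi/2.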
In short, I wish to circumvent step 1 in the ORB detector. The image pyramid might complicate things, but for my use case, losing scale invariance is a reasonable compromise.