Subpixel location of keypoints. Why?

asked 2015-02-13 10:31:27 -0600

Doombot

updated 2015-02-13 10:31:49 -0600

When detecting keypoints (with BRISK, ORB, etc.), I get coordinates with subpixel accuracy (e.g. pt.x = 110.645, pt.y = 285.432). While I am familiar with the concept of subpixels, I wonder why the location of the keypoint is a float rather than an int (rounded up/down) value, such as pt.x = 111 and pt.y = 285. Sure, I could simply cast the float to an int, but that doesn't answer the why.

I mean, when the detection algorithm searches for a keypoint, it first selects a pixel, then applies various tests to determine whether that pixel and the patch around it really constitute a keypoint according to the method's criteria. I know it also estimates the orientation of the keypoint, which may be a float in itself. But even after looking at the code and at the AGAST and BRISK papers, I humbly don't understand what the point is of using subpixel locations for the keypoint.
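For context, one common way detectors obtain a fractional position from integer-grid responses is quadratic interpolation: fit a parabola through the response at the best pixel and its two neighbors and take the parabola's maximum (the BRISK paper does this in 2D across the image plane and in 1D across scales). A minimal 1D sketch of the idea, with made-up response values:

```python
def subpixel_peak(r_left, r_center, r_right):
    """Offset (in pixels, within [-0.5, 0.5]) of the maximum of the
    parabola fitted through response samples at x-1, x, and x+1."""
    denom = r_left - 2.0 * r_center + r_right
    if denom == 0:  # flat neighborhood: keep the integer location
        return 0.0
    return 0.5 * (r_left - r_right) / denom

# The response peaks at column 110 on the integer grid, but the fitted
# parabola puts the true maximum a quarter pixel to the right:
print(110 + subpixel_peak(2.0, 5.0, 4.0))  # → 110.25
```

So even though the detector only ever evaluates integer pixels, the refined location naturally comes out as a float.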

But since it is the way it is implemented in OpenCV (3 for me, but I guess it is the same in 2.4.X), I assume there is a good reason! I might just have misread portions of the paper or missed something in the comments of the code...




This can happen e.g. due to scale, or simply because the corner (or edge) is computed with subpixel accuracy. And yes, you often want subpixel accuracy for later parts of the pipeline, e.g. feature extraction or clustering of the points.
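To illustrate the scale part of this answer: a keypoint found at integer pixel coordinates in a downsampled scale-space layer generally maps back to a fractional position in the full-resolution image. A minimal sketch (function name and values are hypothetical):

```python
def to_original_coords(x, y, layer_scale):
    """Map integer pixel coordinates found in a downsampled
    scale-space layer back to full-resolution image coordinates."""
    return (x * layer_scale, y * layer_scale)

# A corner found at integer pixel (73, 190) in a layer downsampled by
# a factor of 1.5 lands at a fractional full-resolution position:
print(to_original_coords(73, 190, 1.5))  # → (109.5, 285.0)
```

This is why even a detector that only tests integer pixels at each layer reports float coordinates once everything is expressed in the original image frame.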

Guanta ( 2015-02-13 13:08:11 -0600 )