How are SiftDescriptorExtractor and SiftFeatureDetector implemented?

asked 2018-05-20 07:27:19 -0600

SIFT is said to be scale, rotation, and viewpoint invariant. But at which stage does it gain these characteristics? If I were to detect keypoints with the FastFeatureDetector and compute descriptors with the SiftDescriptorExtractor, would the descriptors be the same for the same keypoint seen from a different viewpoint?


If I convert cv::KeyPoint into cv::Point2f to transform the keypoints into another viewpoint and then convert the transformed cv::Point2f back to cv::KeyPoint, would the information in the cv::KeyPoint be the same as if a keypoint at that location had been detected by SiftFeatureDetector?



1 answer


answered 2018-05-22 06:45:48 -0600 by Grillteller

But at which stage does it gain these characteristics? If I were to detect keypoints with the FastFeatureDetector and compute descriptors with the SiftDescriptorExtractor, would the descriptors be the same for the same keypoint seen from a different viewpoint?

Yes, the descriptors should be the same (or almost the same) for the same point seen from a different view. Outliers still occur, though, and you have to filter them out later. In SIFT proper, descriptors are computed only at very distinctive points: local extrema in a pyramid of difference-of-Gaussians. The classical SIFT descriptor is a vector containing the values of orientation-histogram entries. It is very well explained in the original paper by Lowe and on various websites (even on Wikipedia: https://en.wikipedia.org/wiki/Scale-i...)
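
As a minimal sketch of the mixed pipeline the question describes (assuming OpenCV 4.4 or later, where cv::SIFT lives in the main features2d module; older builds use the SIFT class from opencv_contrib's xfeatures2d instead, and the image file names here are placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <vector>

    int main() {
        // Two views of the same scene (file names are placeholders).
        cv::Mat img1 = cv::imread("view1.png", cv::IMREAD_GRAYSCALE);
        cv::Mat img2 = cv::imread("view2.png", cv::IMREAD_GRAYSCALE);

        // Detect with FAST, describe with SIFT, as in the question.
        cv::Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
        cv::Ptr<cv::SIFT> sift = cv::SIFT::create();

        std::vector<cv::KeyPoint> kp1, kp2;
        fast->detect(img1, kp1);
        fast->detect(img2, kp2);

        cv::Mat desc1, desc2;
        sift->compute(img1, kp1, desc1);
        sift->compute(img2, kp2, desc2);

        // Descriptors of the same physical point should be close, so
        // brute-force nearest-neighbour matching finds candidate pairs.
        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);
        return 0;
    }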

If I convert cv::KeyPoint into cv::Point2f to transform the keypoints into another viewpoint and then convert the transformed cv::Point2f back to cv::KeyPoint, would the information in the cv::KeyPoint be the same as if a keypoint at that location had been detected by SiftFeatureDetector?

If you convert a KeyPoint to a Point2f, some information gets lost. cv::KeyPoint has additional attributes: angle, class_id, octave, response, and size, which are useful for matching (https://docs.opencv.org/3.2.0/d2/d29/...). They are thrown away when converting to Point2f with cv::KeyPoint::convert, but for me that was sometimes necessary anyway. You can keep these values in separate vectors, as sketched below.
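
A minimal sketch of that approach, assuming the viewpoint change is given as a 3x3 homography H; the function name transformKeypoints is just illustrative:

    #include <opencv2/core.hpp>
    #include <vector>

    // Transform keypoint coordinates while preserving the per-keypoint
    // attributes that cv::KeyPoint::convert would otherwise discard.
    std::vector<cv::KeyPoint> transformKeypoints(
        const std::vector<cv::KeyPoint>& kps, const cv::Mat& H) {
        // Save the extra attributes before the lossy conversion.
        std::vector<float> sizes, angles, responses;
        std::vector<int> octaves, classIds;
        for (const cv::KeyPoint& kp : kps) {
            sizes.push_back(kp.size);
            angles.push_back(kp.angle);
            responses.push_back(kp.response);
            octaves.push_back(kp.octave);
            classIds.push_back(kp.class_id);
        }

        // KeyPoint -> Point2f (drops angle, size, octave, ...).
        std::vector<cv::Point2f> pts;
        cv::KeyPoint::convert(kps, pts);

        // Transform the bare coordinates with the homography.
        std::vector<cv::Point2f> ptsWarped;
        cv::perspectiveTransform(pts, ptsWarped, H);

        // Point2f -> KeyPoint, restoring the stored attributes.
        std::vector<cv::KeyPoint> out;
        for (size_t i = 0; i < ptsWarped.size(); ++i) {
            out.emplace_back(ptsWarped[i], sizes[i], angles[i],
                             responses[i], octaves[i], classIds[i]);
        }
        return out;
    }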


Comments

@Grillteller So if I keep those values (angle, size, octave) in a vector and transform the x,y coordinates, will the stored information still be of any use for the transformed keypoints? Wouldn't the angle and size of the feature potentially change under the transformation?

kumpakri ( 2018-05-22 07:17:11 -0600 )

Difficult question; it depends on what you are doing. If you transform the point and compute a new keypoint and descriptor at the transformed location, then, by the invariance of the descriptor, the descriptor values should be (almost) the same, but the angle and size of the keypoint may still vary. This is essentially what feature matching does: given two or more images, you try to find corresponding points in both even when the images are geometrically distorted or radiometrically changed. Corresponding keypoints may have different angles, sizes, etc., but their descriptors should be the same or nearly so. Since there are often still a lot of outliers, this is not guaranteed, and the matches (closest descriptors) have to be filtered, e.g. with the ratio test sketched below.

Grillteller ( 2018-05-22 07:43:26 -0600 )
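
To make the filtering step concrete, here is a minimal sketch of Lowe's ratio test on k-nearest-neighbour matches (the 0.75 threshold is the value commonly taken from Lowe's paper, not something fixed by this thread):

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <vector>

    // Keep a match only if the best descriptor distance is clearly
    // smaller than the second-best one (Lowe's ratio test).
    std::vector<cv::DMatch> ratioFilter(const cv::Mat& desc1,
                                        const cv::Mat& desc2,
                                        float ratio = 0.75f) {
        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(desc1, desc2, knn, 2);  // two nearest neighbours

        std::vector<cv::DMatch> good;
        for (const auto& pair : knn) {
            if (pair.size() == 2 && pair[0].distance < ratio * pair[1].distance)
                good.push_back(pair[0]);
        }
        return good;
    }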
