I'm currently testing different feature detectors and descriptors on a set of images.
I'm matching an image against a rotated version of itself, and I get a higher percentage of correct matches using the FAST feature detector with the SIFT descriptor than using SIFT for both detection and description. A sketch of my pipeline is below.
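For reference, this is roughly what I'm doing (a minimal sketch using OpenCV's Python bindings; the image path and the 90-degree rotation are just placeholders for my actual test data and transformations):

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder image
rotated = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)          # one example rotation

fast = cv2.FastFeatureDetector_create()
sift = cv2.SIFT_create()

# Variant A: FAST keypoints, SIFT descriptors
kp1 = fast.detect(img, None)
kp1, des1 = sift.compute(img, kp1)
kp2 = fast.detect(rotated, None)
kp2, des2 = sift.compute(rotated, kp2)

# Variant B: SIFT keypoints and descriptors
# kp1, des1 = sift.detectAndCompute(img, None)
# kp2, des2 = sift.detectAndCompute(rotated, None)

# Brute-force matching with Lowe's ratio test to count "good" matches
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "matches after ratio test")
```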
The keypoints found by the FAST detector don't carry any orientation information.
So how does the SIFT descriptor work with those keypoints? Does it just use 0 degrees as the orientation, or does it compute the orientation itself?
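One thing I tried in order to narrow this down was inspecting KeyPoint.angle before and after calling compute() (again a sketch with OpenCV's Python bindings; FAST leaves angle at -1, meaning "undefined", but I don't know how compute() treats that):

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image
fast = cv2.FastFeatureDetector_create()
sift = cv2.SIFT_create()

kp = fast.detect(img, None)
print(kp[0].angle)            # -1.0: FAST assigns no orientation

kp, des = sift.compute(img, kp)
print(kp[0].angle)            # still -1, or replaced by a SIFT-computed orientation?
```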