For rotated and scaled images I achieve good matching results with FREAK when I adjust the patternScale parameter and downsample the training image like this:
// halve the training image once before computing descriptors
pyrDown(training, training);
// FREAK(orientationNormalized, scaleNormalized, patternScale, nOctaves)
FREAK extractor(true, true, 40, 4);
extractor.compute(training, keypointsA, descriptorsA);
extractor.compute(img, keypointsB, descriptorsB);
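For context, the pipeline around that snippet looks roughly like the sketch below, assuming OpenCV 2.4. The FAST detector, the Hamming-distance BFMatcher, and the image file names are only placeholders to make the example self-contained; the part taken from my code is the pyrDown/FREAK section.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace cv;

int main()
{
    // Hypothetical file names, loaded as grayscale.
    Mat training = imread("training.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat img      = imread("query.png",    CV_LOAD_IMAGE_GRAYSCALE);

    // Halve the training image once, as in the snippet above.
    pyrDown(training, training);

    // Any keypoint detector would do; FAST is just an example here.
    FastFeatureDetector detector(40);
    std::vector<KeyPoint> keypointsA, keypointsB;
    detector.detect(training, keypointsA);
    detector.detect(img, keypointsB);

    // FREAK(orientationNormalized, scaleNormalized, patternScale, nOctaves)
    FREAK extractor(true, true, 40, 4);
    Mat descriptorsA, descriptorsB;
    extractor.compute(training, keypointsA, descriptorsA);
    extractor.compute(img, keypointsB, descriptorsB);

    // FREAK descriptors are binary, so they are matched with Hamming distance.
    BFMatcher matcher(NORM_HAMMING);
    std::vector<DMatch> matches;
    matcher.match(descriptorsA, descriptorsB, matches);

    return 0;
}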
What is the relation between the patternScale parameter and the size of the training image? Is there a way to tune it so that it works for all rotations and scalings?