I've just spent a week exploring the various feature-detection algorithms in OpenCV 3.0. I've tried ORB, BRISK, and SURF with various combinations of descriptor extractors (ORB, BRIEF, SURF, FREAK), and I've experimented with knnMatch, the ratio test, cross-check match validation, and fine-tuning the maximum-distance cut-off.
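For concreteness, the ratio-test filtering I've been applying looks roughly like this (a minimal sketch of Lowe's ratio test in plain Python; each pair stands in for the two distances that `knnMatch(des1, des2, k=2)` would return for one keypoint, and the numbers are made-up for illustration):

```python
# Minimal sketch of Lowe's ratio test on knnMatch-style results.
# Each entry is a (best, second_best) descriptor-distance pair; the
# numeric values are made-up stand-ins, not real matcher output.

def ratio_test(knn_pairs, ratio=0.75):
    """Keep only matches whose best distance is clearly smaller
    than the second-best candidate's distance."""
    good = []
    for best, second in knn_pairs:
        if best < ratio * second:
            good.append(best)
    return good

pairs = [(10.0, 50.0),   # distinctive match -> kept
         (30.0, 32.0),   # ambiguous match   -> rejected
         (5.0, 100.0)]   # distinctive match -> kept
print(ratio_test(pairs))  # -> [10.0, 5.0]
```

The test only rejects *ambiguous* matches (where the runner-up is nearly as close); a match can pass it and still land on the wrong object.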
I'm doing this on Android, so I have not explored slower algorithms like SIFT.
What I've found is disappointing: once I've trained the algorithm on a given object, it can only reliably find that object when it appears alone in an image on a plain background, which is pretty useless. If the target image contains any additional feature points, which it will on any non-solid background or if other objects are present, then the algorithms have no reliable way of matching feature points such that the trained object can be located. Feature points end up matched in a seemingly semi-random way. I know it is not actually random, but it is certainly not accurate either.
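The cross-check validation I mentioned above illustrates why this happens: it only enforces that two descriptors are mutual nearest neighbours, not that they actually correspond to the same physical point. A rough sketch in plain Python (the toy distance matrix is made-up for illustration):

```python
# Minimal sketch of cross-check match validation: a match (i, j) is kept
# only if train descriptor j's nearest neighbour among the query
# descriptors is i again. The distances are a made-up toy matrix.

def cross_check(dist):
    """dist[i][j] = distance between query descriptor i and train
    descriptor j. Returns the mutual-nearest-neighbour pairs (i, j)."""
    matches = []
    for i, row in enumerate(dist):
        j = min(range(len(row)), key=row.__getitem__)             # i's best train match
        i_back = min(range(len(dist)), key=lambda k: dist[k][j])  # j's best query match
        if i_back == i:
            matches.append((i, j))
    return matches

toy = [[1.0, 9.0, 8.0],
       [7.0, 2.0, 6.0],
       [3.0, 5.0, 4.0]]
print(cross_check(toy))  # -> [(0, 0), (1, 1)]
```

A mutual nearest neighbour on a cluttered background can still be a geometrically wrong match, so passing this filter doesn't mean the match sits on the trained object.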
So I'm curious what success anyone has had using these algorithms in real-world situations. I notice that all the examples in the documentation are very simple and highly controlled.
Perhaps I misunderstand what the point of these feature detectors is. Can they actually be used to locate objects? If you have used them successfully, I'd like to know what your 'use cases' were and how these algorithms added value.