
Evaluation of binary descriptors


I'm using the project (EXAMPLE) detector_descriptor_matcher_evaluation to make a comparison of the performance of binary descriptors.

The results I get basically show that under blur, exposure change, and JPEG compression, SIFT > BRIEF > ORB > BRISK > FREAK > SURF, while under viewpoint change and rotation+zoom, SIFT > FREAK > BRIEF > BRISK > SURF > ORB, where ">" denotes "outperforms".

These results somewhat contradict my intuition. ORB is an "improved" version of BRIEF that adds learned pairs and rotation invariance, so how can BRIEF outperform ORB?

In addition, I'm getting very bad results on the zoom-and-rotation tests (the bark and boat datasets): lower than 2% precision for every image pair other than (im1, im2). Should I expect such low precision?
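For context, my understanding is that precision in these benchmarks is (correct matches) / (all matches), where a match counts as correct if the im1 keypoint, projected into the other image by the dataset's ground-truth homography, lands within a small pixel threshold of its matched keypoint. A minimal numpy sketch of that computation (the function name, the 2.5 px threshold, and the index-pair match format are my own assumptions, not the sample's actual API):

```python
import numpy as np

def match_precision(kp1, kp2, matches, H, thresh=2.5):
    """Fraction of correct matches under ground-truth homography H.

    kp1, kp2: (N, 2) arrays of keypoint coordinates in im1 and im2.
    matches:  list of (i, j) index pairs (kp1[i] matched to kp2[j]).
    A match is correct if H projects kp1[i] to within `thresh` pixels
    of kp2[j].  (Threshold value is an assumption for illustration.)
    """
    if not matches:
        return 0.0
    correct = 0
    for i, j in matches:
        # project kp1[i] into im2 via the homography (homogeneous coords)
        p = H @ np.array([kp1[i][0], kp1[i][1], 1.0])
        p = p[:2] / p[2]
        if np.linalg.norm(p - kp2[j]) <= thresh:
            correct += 1
    return correct / len(matches)
```

With this definition, precision on (im1, imN) for large N can drop sharply if the detector fires on different structures after a big scale/rotation change, so few matches project correctly.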

Thanks in advance,