Binary Descriptors in Feature Matching

I just wanted to learn about feature detectors, descriptors and matchers.

After my reading I was clear about detectors and descriptors: a descriptor describes the region around each feature that the detector finds in an image. Descriptors need to be rotation- and scale-invariant. Every descriptor has a corresponding detector, but not vice versa, since not every detected feature can be described by every descriptor. This topic is clearly explained in the documentation, which I have read.

Here is my doubt.

After reading this tutorial on Binary Descriptors, I got some idea of what they are. In short:

1) Sample 512 fixed pixel pairs from a patch in image A.

2) For each pair, compare the intensity of the first pixel with that of the second. If the first value is higher, write a 1; otherwise write a 0.

3) We now have a 512-bit string of 1s and 0s; call it descriptor A.

4) Repeat the above three steps on the corresponding patch in a different image B, giving another 512-bit string; call it descriptor B.

5) Compute the Hamming distance between the two binary strings (XOR them and count the 1 bits).

6) The result of the XOR is a bit string whose number of 1s is the Hamming distance: the lower it is, the more similar the two patches are.

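If I have understood correctly, steps 1-5 can be sketched in plain Python. The patch size, the sampling pattern and all the names here are my own toy assumptions, not from any particular library:

```python
import random

def binary_descriptor(patch, pairs):
    """Build a binary descriptor: one bit per sampled pixel pair.

    patch -- 2D list of pixel intensities
    pairs -- list of ((y1, x1), (y2, x2)) sample locations (fixed in advance)
    """
    bits = 0
    for (y1, x1), (y2, x2) in pairs:
        bits <<= 1
        # step 2: compare intensities; 1 if the first pixel is brighter
        if patch[y1][x1] > patch[y2][x2]:
            bits |= 1
    return bits

def hamming(a, b):
    # step 5: XOR, then count the differing bits
    return bin(a ^ b).count("1")

random.seed(0)
size = 8  # toy patch size, chosen arbitrarily
# step 1: 512 fixed pixel pairs, reused for every patch
pairs = [((random.randrange(size), random.randrange(size)),
          (random.randrange(size), random.randrange(size)))
         for _ in range(512)]

patch_a = [[random.randrange(256) for _ in range(size)] for _ in range(size)]
patch_b = [[random.randrange(256) for _ in range(size)] for _ in range(size)]

desc_a = binary_descriptor(patch_a, pairs)  # descriptor A, steps 2-3
desc_b = binary_descriptor(patch_b, pairs)  # descriptor B, step 4
print(hamming(desc_a, desc_b))              # step 5: Hamming distance
```

As I understand it, in real descriptors such as BRIEF the sampling pairs are fixed once (e.g. drawn from a Gaussian around the patch centre) and reused for every patch, so the same bit position always encodes the same intensity test.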
Here are my questions:

1) What happens after this step?

2) How are the lines drawn from one image to the other? I came to know that some kind of distance-based matching is used.

Feature Matching

I just wanted to learn, in practice, what is applied to the images in order to draw the lines between matched descriptors (descriptor matchers - how they work).
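As far as I understand (please correct me if I am wrong), the "lines" come from a nearest-neighbour search: for every descriptor in image A we find the descriptor in image B with the smallest Hamming distance, and each such pair of keypoints gets a line. A toy sketch with made-up 4-bit descriptors:

```python
def match_descriptors(desc_a, desc_b):
    """For each descriptor in image A, find the closest descriptor in
    image B by Hamming distance. The resulting (i, j) index pairs are
    what a drawMatches-style visualization connects with lines."""
    def hamming(x, y):
        return bin(x ^ y).count("1")
    matches = []
    for i, da in enumerate(desc_a):
        j, dist = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                      key=lambda t: t[1])
        matches.append((i, j, dist))  # (index in A, index in B, distance)
    return matches

a = [0b1010, 0b1111]  # toy descriptors from image A
b = [0b1011, 0b0000]  # toy descriptors from image B
print(match_descriptors(a, b))  # → [(0, 0, 1), (1, 0, 1)]
```

If I am reading the docs right, in OpenCV this step corresponds to `cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)` followed by `cv2.drawMatches` to render the lines.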

Any help on this topic would be appreciated.