
This is highly unlikely; imho, no such code exists in the OpenCV code base. You'll have to make do with the algorithm description from

Rublee, Ethan, et al. “ORB: an efficient alternative to SIFT or SURF.” 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011.

4.3. Learning Good Binary Features

To recover from the loss of variance in steered BRIEF, and to reduce correlation among the binary tests, we develop a learning method for choosing a good subset of binary tests. One possible strategy is to use PCA or some other dimensionality-reduction method, and starting from a large set of binary tests, identify 256 new features that have high variance and are uncorrelated over a large training set. However, since the new features are composed from a larger number of binary tests, they would be less efficient to compute than steered BRIEF. Instead, we search among all possible binary tests to find ones that both have high variance (and means close to 0.5), as well as being uncorrelated.

The method is as follows. We first set up a training set of some 300k keypoints, drawn from images in the PASCAL 2006 set [8]. We also enumerate all possible binary tests drawn from a 31×31 pixel patch. Each test is a pair of 5×5 sub-windows of the patch. If we note the width of our patch as wp = 31 and the width of the test sub-window as wt = 5, then we have N = (wp − wt)^2 = 676 possible sub-windows. We would like to select pairs of two from these, so we have C(N, 2) = 228150 binary tests. We eliminate tests that overlap, so we end up with M = 205590 possible tests. The algorithm is:

  1. Run each test against all training patches.
  2. Order the tests by their distance from a mean of 0.5, forming the vector T.
  3. Greedy search:
     a. Put the first test into the result vector R and remove it from T.
     b. Take the next test from T, and compare it against all tests in R. If its absolute correlation is greater than a threshold, discard it; else add it to R.
     c. Repeat the previous step until there are 256 tests in R. If there are fewer than 256, raise the threshold and try again.
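The greedy search above can be sketched in a few lines of code. To be clear, this is an illustrative reimplementation, not code from OpenCV; the function name, the starting threshold, the threshold step, and the use of a precomputed test-response matrix are my own assumptions.

```python
import numpy as np

def select_tests(responses, n_select=256, corr_threshold=0.2, step=0.05):
    """Greedy rBRIEF-style test selection (illustrative sketch, not OpenCV code).

    responses: (num_tests, num_patches) 0/1 matrix; responses[i, j] is the
    outcome of binary test i on training patch j.
    Returns the indices of the selected tests.
    """
    means = responses.mean(axis=1)
    # Step 2: order the tests by the distance of their mean from 0.5 (vector T)
    order = np.argsort(np.abs(means - 0.5))
    threshold = corr_threshold
    while True:
        selected = [order[0]]                      # step 3a: seed R with the first test
        for idx in order[1:]:                      # step 3b: walk the rest of T
            if len(selected) == n_select:
                break
            # absolute correlation against every test already in R
            worst = max(abs(np.corrcoef(responses[idx], responses[s])[0, 1])
                        for s in selected)
            if worst <= threshold:
                selected.append(idx)
        if len(selected) == n_select or threshold > 1.0:
            return np.array(selected)
        threshold += step                          # step 3c: too few tests, relax threshold
```

On real data, the response matrix would come from running all ~205k candidate tests over the ~300k training patches, which is why the paper precomputes step 1 once rather than recomputing responses inside the greedy loop.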

This algorithm is a greedy search for a set of uncorrelated tests with means near 0.5. The result is called rBRIEF. rBRIEF has significant improvement in the variance and correlation over steered BRIEF (see Figure 4). The eigenvalues of PCA are higher, and they fall off much less quickly. It is interesting to see the high-variance binary tests produced by the algorithm (Figure 6). There is a very pronounced vertical trend in the unlearned tests (left image), which are highly correlated; the learned tests show better diversity and lower correlation.
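As a sanity check on the counts quoted in the excerpt, the paper's formula for N and the number of candidate test pairs can be reproduced in a couple of lines (the variable names are mine):

```python
from math import comb

wp, wt = 31, 5            # patch width and test sub-window width, from the paper
N = (wp - wt) ** 2        # the paper's formula for the number of sub-windows
candidates = comb(N, 2)   # unordered pairs of sub-windows, i.e. candidate binary tests
print(N, candidates)      # 676 228150
```

Eliminating overlapping pairs then reduces this count to the M = 205590 quoted in the paper.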
