If you had 200 SURF descriptors in your vocabulary, your bowDescriptor will have 200 numbers, each the distance from a SURF descriptor of your image to one of the features in the vocabulary.

To compare two images this way, you extract the bowFeatures for both images and compare those (instead of comparing the images themselves, or the SURF descriptors).

The simplest algorithm for this is a nearest-neighbour search.

Given a vector of bowFeatures (your train set) and a test candidate, it's as simple as:

vector<Mat> images; // keep the train images around, to look up the winner later
// ... train the bow dictionary, and compute / collect the bowFeatures from the train set:
vector<Mat> bowTrain = ...;

Mat bowTest = computeBowFeatureFromTestImage(img);

int best = 0;
double minDist = DBL_MAX; // needs <cfloat>
for (size_t i = 0; i < bowTrain.size(); i++)
{
     double dist = norm(bowTrain[i], bowTest); // L2 distance between the bow descriptors
     if (dist < minDist) // keep the one with the smallest distance
     {
          minDist = dist;
          best = (int)i;
     }
}

Mat bestImage = images[best];

But of course, this is a very primitive / blunt way to do it. In real life, you want to train a more sophisticated classifier, like kNN or SVM.
