Finding top similar images from a database using SIFT

asked 2016-03-14 02:25:14 -0500 by nilkanth

updated 2016-03-14 04:03:41 -0500 by berak

I am working on a project that uses SIFT features (OpenCV implementation) for image matching. I need to return the top 10-15 images in the database that are similar to the query image. I'm using a visual bag-of-words approach: build a vocabulary first, then do the matching. I've found similar questions but no suitable answer.

Here is code to generate a dictionary from database images:

char filename[100];

Mat input;

//To store the keypoints that will be extracted by SIFT
vector<KeyPoint> keypoints;

//To store the SIFT descriptor of the current image
Mat descriptor;

//To store all the descriptors that are extracted from all the images
Mat featuresUnclustered;

//The SIFT feature detector and descriptor extractor
SiftDescriptorExtractor detector;

for(int f=1; f<20; f++)            // 20 images in database
{
    sprintf(filename, "image%d.jpg", f);   // build the path to the f-th image (adjust to your database)
    input = imread(filename, CV_LOAD_IMAGE_GRAYSCALE); //Load as grayscale

    //detect feature points
    detector.detect(input, keypoints);

    //compute the descriptors for each keypoint
    detector.compute(input, keypoints, descriptor);

    //put all the feature descriptors in a single Mat object
    featuresUnclustered.push_back(descriptor);
}

int dictionarySize=200;

TermCriteria tc(CV_TERMCRIT_ITER,100,0.001);

int retries=1;
int flags=KMEANS_PP_CENTERS;

//Create the BoW (or BoF) trainer
BOWKMeansTrainer bowTrainer(dictionarySize,tc,retries,flags);

//cluster the feature vectors
Mat dictionary=bowTrainer.cluster(featuresUnclustered);

//store the vocabulary
FileStorage fs("dictionary.yml", FileStorage::WRITE);
fs << "vocabulary" << dictionary;
fs.release();

Here's my code to extract a BoW descriptor from the query image using this vocabulary:

Mat dictionary;
FileStorage fs("dictionary.yml", FileStorage::READ);
fs["vocabulary"] >> dictionary;
fs.release();

Ptr<DescriptorMatcher> matcher(new FlannBasedMatcher);
Ptr<FeatureDetector> detector(new SiftFeatureDetector());
Ptr<DescriptorExtractor> extractor(new SiftDescriptorExtractor);
BOWImgDescriptorExtractor bowDE(extractor, matcher);

//hand the vocabulary to the BoW extractor
bowDE.setVocabulary(dictionary);

char filename[100];    // path of the query image goes here
char imageTag[10];     // key under which the descriptor is stored

//open the file to write the resultant descriptor
FileStorage fs1("descriptor.yml", FileStorage::WRITE);

//the image file with the location
Mat img = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);

//To store the keypoints that will be extracted by SIFT
vector<KeyPoint> keypoints;

//Detect SIFT keypoints (or feature points)
detector->detect(img, keypoints);

//To store the BoW (or BoF) representation of the image
Mat bowDescriptor;

//extract BoW (or BoF) descriptor from the given image
bowDE.compute(img, keypoints, bowDescriptor);

fs1 << imageTag << bowDescriptor;
fs1.release();


I don't know how to use bowDescriptor to retrieve the most similar images from the database.


1 answer


answered 2016-03-14 03:52:04 -0500 by berak

updated 2016-03-14 04:05:27 -0500

After you have successfully extracted bowDescriptors for your images, you can start training some machine learning on them. Since you want the closest 10-15 images returned from your matching, we'll use a cv::flann::Index:

static cv::Ptr<cv::flann::Index> train_index(const Mat &trainData)
{
    // binary descriptors (ORB, BRISK, ...)
    if (trainData.type() == CV_8U)
        return makePtr<cv::flann::Index>(trainData,
               cv::flann::LinearIndexParams(), cvflann::FLANN_DIST_HAMMING);
    // float descriptors (SIFT, SURF, ...) and BoW histograms
    return makePtr<cv::flann::Index>(trainData,
           cv::flann::LinearIndexParams(), cvflann::FLANN_DIST_L2);
}

Mat getBow(const Mat &img, Ptr<FeatureDetector> detector, BOWImgDescriptorExtractor &bowDE)
{
    //Detect SIFT keypoints (or feature points)
    vector<KeyPoint> keypoints;
    detector->detect(img, keypoints);

    //extract BoW (or BoF) descriptor from given image
    Mat bowDescriptor;
    bowDE.compute(img, keypoints, bowDescriptor);

    return bowDescriptor;
}

// 1.a: collect train data (from your image db):
Mat trainData;
for (int i=0; i<numTrainImages; i++)
{
    Mat bowFeature = getBow(trainimage, detector, bowDE);
    trainData.push_back(bowFeature);
}

// 1.b: train your index:
Ptr<cv::flann::Index> index = train_index(trainData);

// 2.a: prepare test feature:
Mat bowTest = getBow(testimage, detector, bowDE);

// 2.b: now you can predict the closest items:
int K=15;
cv::Mat dists, indices;
index->knnSearch(bowTest, indices, dists, K);

cerr << indices << endl;

One caveat here: you have to make sure your trainData is still valid (alive and unmodified) when you call knnSearch, because the flann index keeps a raw pointer to the float data rather than copying it.



@berak Can you please explain how to use this knnSearch? Since I'm new to OpenCV, could you share an example? Thanks.

nilkanth ( 2016-03-15 05:24:59 -0500 )

there is an example in front of your eyes, please look again.

berak ( 2016-03-15 07:39:49 -0500 )

Yes, you are right, but you didn't get me. I just want to know how to use the indices after knnSearch.

nilkanth ( 2016-03-16 04:18:51 -0500 )

the indices are the same as for your bowFeatures / trainImages

berak ( 2016-03-16 06:10:56 -0500 )

Ok, I'm getting output on the console as: indices [8,1,5,2,4,3,0]. What does this mean? Thanks for helping to resolve my issues.

nilkanth ( 2016-03-17 06:27:23 -0500 )

it simply means: image number 8 is the closest to your test img, followed by 1, 5, 2, ...

note, that to make a good bow dictionary, you should use 1000+ images, not 20.

berak ( 2016-03-17 06:37:06 -0500 )
