How to calculate generalized Hough voting of SIFT features for content-based image retrieval?

asked 2018-02-19 15:21:03 -0600 by S.EB


I am trying to reproduce the method of a paper and am having trouble understanding the implementation steps. In the offline procedure, the authors extract SIFT features from the training set and store them in inverted-index form. During the online procedure, a query mass image is matched against all training images through Hough voting of SIFT features, and a similarity score is calculated to estimate the similarity between the query image and the retrieved images.

I have done the following steps:

  • I extracted SIFT features from the database and query (test) images. The descriptors were saved into des_train and des_query respectively.
  • Visual vocabularies were created by k-means clustering. I created the tf-idf table with vocab_size=100 (for example) for both the training and testing datasets.
  • I am stuck here: I have to extract a tuple {(v_k, p_k) for k = 1..n} for a training image x, where n is the number of features extracted from image x, v_k is the k-th visual word id, and p_k = [x_k, y_k] is the position of v_k relative to the object center in training image x.
  • For a given query image q, it is matched against every training image d: a similarity map of the same size as q is computed, whose element at position p indicates the similarity between d and the region of q centered at p. The matching is based on generalized Hough voting of SIFT features.
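My understanding of the third step, as a minimal sketch: it is vocabulary matching, i.e. each descriptor is assigned to its nearest k-means cluster center (visual word), and the stored p_k is the keypoint's offset from the annotated object center. The function name, argument layout, and the assumption that the object center is known from the annotation are all mine, not from the paper; SIFT extraction and k-means are assumed already done (des_train and the cluster centers exist, as in my steps above).

```python
import numpy as np

def build_word_offset_tuples(des_train, pts, centers, object_center):
    """Sketch of building {(v_k, p_k)} for one training image.

    des_train     : (n, d) descriptors already extracted from the image
    pts           : n keypoint positions [x, y]
    centers       : (vocab_size, d) k-means cluster centers
    object_center : [x, y] annotated center of the object
    """
    tuples = []
    for des, pt in zip(des_train, pts):
        # v_k: id of the nearest visual word (quantize descriptor to vocabulary)
        v_k = int(np.argmin(np.linalg.norm(centers - des, axis=1)))
        # p_k: offset of this keypoint from the object center
        p_k = np.asarray(pt, dtype=float) - np.asarray(object_center, dtype=float)
        tuples.append((v_k, p_k))
    return tuples
```

Storing these tuples in an inverted index keyed by v_k would then match the offline structure described in the paper.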

My question is two-fold:

  1. How do I extract the information in the third point above? Is this visual-vocabulary matching or descriptor matching?
  2. Does anyone know how I can get the matching score between the query image and the training images through Hough voting of SIFT features (4th point)?

I am quite new to CBIR, so your expert opinion is really appreciated. If there are any resources or code, could you please share them with me?



Do you have a link to that paper? (It is probably impossible to understand your question without it.)

berak ( 2018-02-20 06:45:32 -0600 )

Yes, I added the link here. Thanks.

S.EB ( 2018-02-20 09:54:59 -0600 )

Hmm, and what is your context? Are you trying to build a "general" CBIR application?

That paper seems to be very specific about learning the shape of particular blobs in mammography.

berak ( 2018-02-20 10:46:25 -0600 )

Thanks for your efforts in helping me. Yes, I want to simulate the procedure of retrieving the images similar to the query image and then obtain the shape prior. I do not understand how to calculate the similarity score that I mentioned previously (3rd and 4th points); it seems the retrieval procedure is based on the score calculated by Hough voting. Moreover, I do not understand how the tuple of visual word and offset from the center is extracted: is it a comparison of descriptors or of visual words? And if it is calculated for every descriptor in the image, how is this feasible?

S.EB ( 2018-02-20 11:27:13 -0600 )