# Bag of Words/Features + Locality Sensitive Hashing for Content-based image retrieval (CBIR) implementation

First of all: I don't know if this is the right forum to post this question; I'm sorry if it's not.

I'm trying to implement a Content-based image retrieval (CBIR) system.

In order to do that, I'm trying to combine two models:

- Bag of Features (BoF) for converting an image to a vector (a histogram of features)
- Locality Sensitive Hashing (LSH) to find the image most similar to a given query image (both expressed as vectors, thanks to phase 1).

This is an ad-hoc diagram that I created for this question (please be kind about it, it's like a son to me :D ).

We can summarize the entire process in the following steps:

**Phase 1: Histogram Creation (Offline, preprocessing):**

- For each image `i`, compute its set of keypoints and descriptors `i1...id`, where `d` is the number of keypoints/descriptors per image.
- Run `k`-means on the whole set of descriptors.
- The result is the dictionary of features: a `k x 128` matrix (if we use SIFT descriptors) where each row is a centroid.

**Here is my first question:** how do we obtain the histogram (a `1 x k` vector) for each image `i`? Someone here suggested using the radius of each cluster to find the cluster that each descriptor belongs to, but I don't know how to implement this.
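To make phase 1 concrete, here is a minimal NumPy sketch of what I have in mind. The toy `kmeans` helper and the hard nearest-centroid assignment in `histogram` are my own guesses at how this step could work (not a confirmed solution); in practice the descriptors would come from something like OpenCV's SIFT, and a real k-means implementation would be used instead.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Toy k-means: in practice, use OpenCV or scikit-learn instead.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each descriptor to its nearest centroid.
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def build_dictionary(descriptors_per_image, k):
    # Stack all descriptors from all images: (sum of d_i) x 128 for SIFT.
    all_desc = np.vstack(descriptors_per_image)
    return kmeans(all_desc, k)  # k x 128 dictionary of visual words

def histogram(descriptors, dictionary):
    # Hard assignment: each descriptor votes for its nearest centroid,
    # giving a 1 x k histogram; normalise so image size doesn't matter.
    labels = np.linalg.norm(
        descriptors[:, None] - dictionary[None], axis=2).argmin(axis=1)
    hist = np.bincount(labels, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

Note that this uses plain nearest-centroid assignment rather than the cluster-radius idea; I'm not sure which one is standard.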

**Phase 2: Query Processing**

- Given a query image `q`, compute its keypoints/descriptors `q1...qd` as before.
- Using the dictionary computed in phase 1, compute the histogram of `q`. **Notice that the same problem from phase 1 occurs here again**, so the proposed solution must be valid for both dataset images and query images.
- To find the "most similar image", we solve the 1-approximate nearest neighbor problem (1-ANN) with the Locality Sensitive Hashing (LSH) algorithm.
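For the LSH step, my current understanding is something like the following random-hyperplane sketch (the class name and parameters are made up by me, and a real system would use multiple hash tables to boost recall):

```python
import numpy as np

class HyperplaneLSH:
    # Random-hyperplane LSH: each hyperplane contributes one bit of the
    # hash key, so similar vectors tend to land in the same bucket.
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}
        self.vectors = np.empty((0, dim))

    def _key(self, v):
        return tuple((self.planes @ v) > 0)

    def index(self, vectors):
        # Store all dataset histograms and hash each one into a bucket.
        self.vectors = np.asarray(vectors, dtype=float)
        for i, v in enumerate(self.vectors):
            self.buckets.setdefault(self._key(v), []).append(i)

    def query(self, q):
        # Candidates come from the query's bucket; brute-force the rest.
        cand = list(self.buckets.get(self._key(q), range(len(self.vectors))))
        dists = np.linalg.norm(self.vectors[cand] - q, axis=1)
        return cand[int(dists.argmin())]  # index of the 1-ANN candidate
```

Here `index` would be run offline over all phase-1 histograms, and `query` online over the query histogram.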

Now that I have described the whole procedure, **my second question is: is this a good approach for implementing CBIR? Are there other solutions? What are the possible pros/cons?**

## Comments

- Those folks seem to follow a very similar approach.
- Phase 2: you could use a `flann::index` with LSH here, but that would need (offline) training, too.