In Bag of Visual Words, how do I pass descriptors instead of keypoints?
I'm implementing the Bag of Features model through OpenCV.
My workflow is the following:

1. Compute SIFT keypoints & descriptors for each image in the dataset.
2. Using the descriptors and `cv::BOWKMeansTrainer`, compute the `k` centroids with the k-means algorithm via `cv::BOWKMeansTrainer::cluster()`.
3. Using `cv::BOWImgDescriptorExtractor`, compute the word of each image `img` from the dataset (and likewise for a query) through `compute(img, keyPoints, word)` (we can use the `keyPoints` that we computed during step 1).
The problem is in step 3: I think `compute` computes the descriptors of `img` all over again. This is terribly inefficient: we already computed the needed descriptors in step 1! How can I call `compute`, passing the already computed descriptors?

Note that I didn't check the implementation of `compute`, but I'm quite sure (from my understanding of the BoF model) that internally it's going to compute the descriptors.
You only worry about this because the training set for your dictionary (steps 1 and 2 above) overlaps with your SVM (or whatever you use) training data, which is usually not the case.
Really? For an image-retrieval system this is absolutely the case! :D And for plenty of other applications too, I guess.
What about this `compute()` overload, then?