
How to use the OpenCV BoW API in Android

asked 2018-01-14 13:41:32 -0500

BhanuPrasad

I am using a SURF FeatureDetector to compare static images with camera images.

But berak suggested that I use the BoW API for my use case. Can anyone provide sample code or suggest the steps for using the BoW API in Android? Note: I know Java, but I have no knowledge of C++, so sample code in Java would be very helpful. I have to submit my project this week.


1 answer


answered 2018-01-15 10:34:58 -0500

berak

updated 2018-01-15 10:58:00 -0500

unfortunately again, opencv's BagOfWords classes are not usable from java, so you'll have to improvise ;)

1) build a BoW dictionary (you only need to do that once). collect as many (SIFT or SURF, as you did before) descriptors as you can, from many images. if you want to retain 200 vocabulary words later, you need at least 10x as many training descriptors.

 Mat features = new Mat(); // bow train data: one descriptor per row, CV_32F
 // for all training images:
 descriptorExtractor.compute(sceneImage, sceneKeyPoints, sceneDescriptors);
 features.push_back( sceneDescriptors );

 // later, once you have collected all features:
 // kmeans-cluster them; the retained centers (one row per "visual word")
 // will be our BoW vocabulary:
 Mat bestLabels = new Mat();
 Mat vocab = new Mat();
 TermCriteria crit = new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 100, 0.1);
 Core.kmeans(features, 100, bestLabels, crit, 3, Core.KMEANS_PP_CENTERS, vocab);
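if it helps to see what "matching a descriptor against the vocabulary" boils down to: it is just a nearest-cluster-center lookup. here is a plain-java sketch with toy 2-d numbers (no opencv involved, the class and method names are made up for the illustration):

```java
public class NearestWord {
    // index of the vocabulary row closest (squared L2) to the descriptor;
    // this is conceptually what a brute-force matcher does against the vocab Mat
    static int nearestWord(float[] desc, float[][] vocab) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < vocab.length; i++) {
            double d = 0;
            for (int j = 0; j < desc.length; j++) {
                double diff = desc[j] - vocab[i][j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        // a tiny "vocabulary" of three 2-d words
        float[][] vocab = { {0f, 0f}, {1f, 1f}, {5f, 5f} };
        // this descriptor is closest to word 1
        System.out.println(nearestWord(new float[]{0.9f, 1.2f}, vocab));
    }
}
```

in the real code, the descriptors are 64- or 128-dimensional SURF/SIFT vectors and the matcher does this lookup for you.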

2) now, for any actual image, we calculate a signature (bag-of-words feature). instead of matching the sceneDescriptors to a trainImage (as you did before), we match them against our vocabulary and keep an array of counters: a histogram with one bin per vocabulary word. it simply counts which vocabulary word each feature in our test image matched.

  // (pseudo code, sorry, i don't have java to test)
  // for each image:
  descriptorExtractor.compute(sceneImage, sceneKeyPoints, sceneDescriptors);
  // match each scene descriptor to its nearest vocabulary word (vocab row):
  MatOfDMatch matches = new MatOfDMatch();
  matcher.match(sceneDescriptors, vocab, matches);
  // one histogram bin per vocabulary word:
  float[] hist = new float[vocab.rows()];
  for (DMatch m : matches.toArray()) {
      hist[m.trainIdx]++;
  }
  Mat feature = new Mat(1, vocab.rows(), CvType.CV_32F);
  feature.put(0, 0, hist);
  Core.normalize(feature, feature);

  // compare the signatures of two images:
  double distance = Core.norm(feature1, feature2);

this array is the BoW signature for the image. we can compare those with norm() for simple similarity, or apply machine learning to train on object classes (SVM), or use it with knn search.
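the histogram / normalize / compare arithmetic can also be sketched in plain java, without any opencv types (the class name and the word indices below are toy values, just for illustration):

```java
public class BowSignature {
    // build a histogram: one bin per vocabulary word, counting how often
    // each word was the best match for a scene descriptor
    static float[] histogram(int[] wordIds, int vocabSize) {
        float[] hist = new float[vocabSize];
        for (int id : wordIds) {
            hist[id]++;
        }
        return hist;
    }

    // L2-normalize the histogram (what Core.normalize does by default)
    static float[] normalize(float[] hist) {
        double sum = 0;
        for (float v : hist) sum += v * v;
        double n = Math.sqrt(sum);
        float[] out = new float[hist.length];
        for (int i = 0; i < hist.length; i++) {
            out[i] = (n > 0) ? (float) (hist[i] / n) : 0f;
        }
        return out;
    }

    // euclidean distance between two signatures (like Core.norm(a, b))
    static double distance(float[] a, float[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // two images whose descriptors matched vocabulary words 0..3
        float[] a = normalize(histogram(new int[]{0, 0, 1, 2}, 4));
        float[] b = normalize(histogram(new int[]{0, 1, 1, 3}, 4));
        System.out.println("distance = " + distance(a, b));
    }
}
```

identical signatures give distance 0; the more two images' word histograms differ, the larger the distance.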




Thank you so much for your reply. One small doubt: could you please clarify point 2, where you said we have to calculate the bag of words for the actual image?

  descriptorExtractor.compute(sceneImage, sceneKeyPoints, sceneDescriptors);
  matcher.match(sceneDescriptors, vocab, matches);

But in the above code we are again computing on sceneImage; it should be the actual image, right? Please clarify.

BhanuPrasad ( 2018-01-15 11:15:14 -0500 )

^^ ofc. you're right, "actual_image" is much better worded here !

berak ( 2018-01-15 11:18:26 -0500 )

Thanks once again for the reply. In the code you are using:

// compare: double distance = Core.norm(feature1, feature2);

what are the feature1 and feature2 objects, and where do I get them? Please help me.

BhanuPrasad ( 2018-01-15 11:25:32 -0500 )

if you have 2 images, you calculate 1 feature vector per image, then you compare those

berak ( 2018-01-15 11:51:42 -0500 )