
Bag of Words - SVM Classification - reg

asked 2014-08-06 01:14:07 -0500

sabariaug23

I'm working on an object detection & recognition project in OpenCV C++.

I'm using the BOW API available in OpenCV.

The classifier used is SVM.

The number of object classes is set to 20, and the number of training images to 50.

The code I'm following is here: http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O

But the classification rate is poor. How can I improve it?

How do I select the number of training images for each object?

How do I select the dictionary size?


1 answer


answered 2014-08-06 03:05:12 -0500

Guanta

Tips for improving standard BoW:

  1. Features:

    • Typically the SIFT descriptors are computed densely over the complete image (i.e. use 'Dense' as the detector, and try different step sizes).
    • Encode some locality: either append the (normalized) x,y coordinates to the SIFT descriptors, or (currently more common) use a spatial pyramid. For example, a spatial pyramid of level 2 divides the image into 4 parts; you compute a BoW descriptor for each part in addition to the regular BoW descriptor, resulting in a final descriptor 5 times larger, which you then pass to the classifier.
  2. Vocabulary:

    • Your dictionary size seems very low; typical values range from 10^3 to 10^5, though this depends strongly on the application.
  3. Classification:

    • Use grid search to find the optimal SVM parameters, and also try kernels other than the linear one (e.g. RBF).
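To make the spatial-pyramid idea in point 1 concrete, here is a minimal sketch in plain C++ (not the OpenCV API; `Word` and `spatialPyramidBow` are made-up names). It concatenates a global word histogram with one histogram per 2x2 quadrant, giving a descriptor 5 times the dictionary size:

```cpp
#include <vector>

// One descriptor quantized to a visual word, with its image position.
struct Word { float x, y; int id; };

// Global histogram + 2x2 quadrant histograms, concatenated into one
// 5*dictSize vector (hypothetical helper, not an OpenCV function).
std::vector<float> spatialPyramidBow(const std::vector<Word>& words,
                                     int dictSize, float imgW, float imgH) {
    std::vector<float> desc(5 * dictSize, 0.f);
    for (const Word& w : words) {
        desc[w.id] += 1.f;                    // global histogram
        int qx = w.x < imgW / 2 ? 0 : 1;      // quadrant column
        int qy = w.y < imgH / 2 ? 0 : 1;      // quadrant row
        int quadrant = qy * 2 + qx;           // 0..3
        desc[(1 + quadrant) * dictSize + w.id] += 1.f;
    }
    return desc;
}
```

In the real pipeline the word ids would come from `BOWImgDescriptorExtractor`; only the concatenation scheme is shown here.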
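And for point 3, grid search is just an exhaustive loop over exponentially spaced parameter values, keeping the pair with the best validation accuracy. A sketch, where the `evaluate` callback stands in for training an SVM and measuring accuracy on a held-out set (the grid ranges follow common practice, not the linked code):

```cpp
#include <cmath>
#include <functional>

struct GridResult { double C, gamma, accuracy; };

// Exhaustive search over C and gamma on a log-2 scale (a sketch;
// `evaluate(C, gamma)` must return validation accuracy for that pair).
GridResult gridSearch(const std::function<double(double, double)>& evaluate) {
    GridResult best{0.0, 0.0, -1.0};
    for (int ce = -5; ce <= 15; ce += 2)        // C = 2^-5 .. 2^15
        for (int ge = -15; ge <= 3; ge += 2) {  // gamma = 2^-15 .. 2^3
            double C = std::pow(2.0, ce);
            double gamma = std::pow(2.0, ge);
            double acc = evaluate(C, gamma);
            if (acc > best.accuracy) best = {C, gamma, acc};
        }
    return best;
}
```

After finding a coarse optimum you can repeat with a finer grid around it.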

A more general piece of advice: always separate your training and test sets. The linked implementation seems to mix them, and then the test results will always be biased.
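A minimal way to get an unbiased test set is to shuffle the sample indices once and split them into disjoint subsets before any training happens. A sketch (the helper name is made up, not from the linked code):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Shuffle indices 0..numSamples-1 and split them into disjoint
// train/test index sets (hypothetical helper).
std::pair<std::vector<int>, std::vector<int>>
trainTestSplit(int numSamples, double testFraction, unsigned seed) {
    std::vector<int> idx(numSamples);
    std::iota(idx.begin(), idx.end(), 0);
    std::mt19937 rng(seed);
    std::shuffle(idx.begin(), idx.end(), rng);
    int numTest = static_cast<int>(numSamples * testFraction);
    std::vector<int> test(idx.begin(), idx.begin() + numTest);
    std::vector<int> train(idx.begin() + numTest, idx.end());
    return {train, test};
}
```

The vocabulary (k-means dictionary) must also be built from the training indices only, otherwise the test set still leaks into the model.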


Comments

How can I decide the dictionary size? My application is object detection & recognition on a wild database, which may include thousands of object classes.

sabariaug23 ( 2014-08-06 23:28:37 -0500 )

I'd start with a fairly large dictionary size of 10^5; of course, if you have trouble fitting that in memory, you have to think of other ways. This is why it is also called "the curse of dimensionality". However, the higher the dimensionality, the easier it typically is to find a separating hyperplane for the classifier (aka the "blessing of dimensionality"), provided the features are suited to your task. Options to reduce the dimensionality afterwards are PCA or product quantization; on the classifier side, you can switch from SVM to SGD (if you have enough training material per class).

Guanta ( 2014-08-07 03:30:08 -0500 )
Stats

Asked: 2014-08-06 01:14:07 -0500

Seen: 1,942 times

Last updated: Aug 06 '14