Color histogram for object recognition

asked 2015-08-03 02:37:01 -0600 by ghost43

updated 2015-08-03 05:42:17 -0600

Hello,

I would like to recognise fruit using a color histogram and an SVM, but whatever I do, the SVM result is wrong. Can someone give me a link on using color histograms for object recognition, or at least some clues?

Currently I divide each image into 12*12 blocks and extract an RGB color histogram from each block. I use the result as the feature vector for my SVM.

When I test the program, all the results look bad, so I think I am doing something wrong.


Comments

12x12 blocks = 144 * 256 * 3 ... I guess you'll end up with quite a large feature vector this way.

  • beginners never use enough training data
  • an SVM can be used with different kernels, try them!
  • there are parameters like C that need adjusting
berak ( 2015-08-03 07:00:32 -0600 )
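berak's dimensionality point above can be made concrete with a little arithmetic. The helper name featureDim is made up for this sketch:

```cpp
// Feature vector length for the scheme in the question:
// blocks x channels x bins-per-channel.
constexpr long featureDim(long blocksX, long blocksY, long channels, long bins) {
    return blocksX * blocksY * channels * bins;
}
// featureDim(12, 12, 3, 256) gives 110592 dimensions per image --
// far more dimensions than 60 training samples can populate.
```

With so many dimensions and so few samples, almost any kernel will overfit or degenerate, which is consistent with the bad results reported.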

Hi again :) So it is not a color-histogram problem? Can you just tell me whether I have it right, please: I divide the image into grids; in each grid I use calcHist() to extract the red, green, and blue histograms, and I average them to construct my histogram ==> hist.at(i) = (histRed.at(i) + histGreen.at(i) + histBlue.at(i)) / 3. I repeat this until I have done all the grids, and after that I use the resulting matrix for the SVM.

ghost43 ( 2015-08-03 08:10:36 -0600 )
So, how much training data is there? How many classes? What are your parameters?

IMHO, your basic idea does not sound too bad (though I'd rather go for HSV and take histograms of H and S only, with maybe fewer than 256 bins and fewer than 12x12 patches).

berak ( 2015-08-03 08:19:24 -0600 )

My params: params.kernel_type = CvSVM::RBF; params.svm_type = CvSVM::C_SVC; params.gamma = 0.50625; params.C = 312.5;

Data: 60 images

Classes: 4

ghost43 ( 2015-08-03 08:22:42 -0600 )
4 classes in only 60 images? Forget it :D Let's multiply that by a factor of 10 at least!

StevenPuttemans ( 2015-08-03 10:41:21 -0600 )

60 images per class ^^ But yeah, I think this is not enough. About the SVM parameters: I read that cross-validation is a good way to find the best ones, but what are the intervals for the parameters? If I have to test different values, I should have an interval, I think. Does OpenCV's train_auto function use the best parameters?

ghost43 ( 2015-08-03 10:55:14 -0600 )
60 images per class is a good first start; I have seen studies with much less (especially in the field of medical image processing). First try a linear kernel and vary the C parameter over 10^-3 - 10^7. And yes, cross-validation is the way to go; train_auto can do that for you in a nice way. If that works to some degree, you can start trying out other kernels like the RBF kernel.

Guanta ( 2015-08-03 14:08:58 -0600 )
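The C sweep suggested above is a logarithmic grid, which is exactly what train_auto iterates over internally via a CvParamGrid (minVal, maxVal, step-factor). A minimal sketch of generating such a grid, with an illustrative helper name:

```cpp
#include <cmath>
#include <vector>

// Enumerate a logarithmic parameter grid: lo, lo*factor, lo*factor^2, ...
// up to hi. This mirrors what CvSVM::train_auto does with a CvParamGrid
// when cross-validating each candidate value of C.
std::vector<double> logGrid(double lo, double hi, double factor) {
    std::vector<double> grid;
    // small tolerance so floating-point drift does not drop the last value
    for (double c = lo; c <= hi * 1.0000001; c *= factor)
        grid.push_back(c);
    return grid;
}
// logGrid(1e-3, 1e7, 10.0) covers Guanta's suggested range in 11 steps.
```

For each value in the grid, k-fold cross-validation scores the classifier, and the best-scoring value is kept; that answers the question about intervals.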

So I tried train_auto to get the best parameters, but I ran into another issue. When I test my program with two classes it works fine; for example, I train the SVM with banana and apple, and when I test with a picture of a banana it works. But when I add a third class to my training data and use the one-vs-all strategy, it can no longer detect that it is a banana.

ghost43 ( 2015-08-03 22:07:29 -0600 )

These are my parameters for train_auto:

    CvParamGrid CvParamGrid_C(pow(2.0, -5), pow(2.0, 15), pow(2.0, 2));
    CvParamGrid CvParamGrid_gamma(pow(2.0, -15), pow(2.0, 3), pow(2.0, 2));

    CvSVMParams param;
    param.kernel_type = CvSVM::RBF;
    param.svm_type = CvSVM::C_SVC;
    param.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 0.000001);

    classes_classifiers[class_].train_auto(samples_32f, labels, Mat(), Mat(),
        param, 10, CvParamGrid_C, CvParamGrid_gamma,
        CvSVM::get_default_grid(CvSVM::P),
        CvSVM::get_default_grid(CvSVM::NU),
        CvSVM::get_default_grid(CvSVM::COEF),
        CvSVM::get_default_grid(CvSVM::DEGREE), true);
ghost43 ( 2015-08-03 22:23:30 -0600 )
Your second problem (with adding a third class) arises because your SVM has no idea what the third class should be. In order to use a one-vs-all strategy you need:

  • For classifier 1: class 1 has label 1, classes 2 & 3 have label 0
  • For classifier 2: class 2 has label 1, classes 1 & 3 have label 0
  • For classifier 3: class 3 has label 1, classes 1 & 2 have label 0

This results in 3 SVMs that are able to separate your data using the one-vs-all strategy.

StevenPuttemans ( 2015-08-04 07:16:22 -0600 )