How to reduce the size of a trained SVM model in OpenCV 3.0?

asked 2017-07-09 21:45:42 -0500

updated 2017-07-09 22:00:39 -0500 by berak

I have three image classification problems. The LBP feature dimension for each problem is 6400, with 6000 samples. I have trained the three SVM models, and each model is about 20 MB. Because I want to port the project to Android, I want to compress the total size of the three models to under 20 MB.

The images' LBP features are sparse, so I tried using PCA for dimensionality reduction. But the projection matrix for mapping down to 500 dimensions is about 30 MB per classification problem, and that is too big.
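For illustration, here is a minimal sketch of the PCA step with random placeholder data (only the 6400-dimensional feature size matches the real setup; the sample count is trimmed so the sketch runs quickly). It shows that the eigenvector matrix needed for projection is 500 x 6400 floats, roughly 12 MB in memory, before the XML text serialization inflates it further:

    // Minimal sketch: why a 6400 -> 500 PCA projection matrix is large.
    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        // Placeholder features: 1000 samples x 6400 LBP dimensions, random data.
        cv::Mat features(1000, 6400, CV_32F);
        cv::randu(features, cv::Scalar(0), cv::Scalar(1));

        // Keep the 500 strongest components.
        cv::PCA pca(features, cv::Mat(), cv::PCA::DATA_AS_ROW, 500);

        // Projection matrix: 500 x 6400 x 4 bytes ~= 12 MB in memory.
        std::cout << "eigenvectors: " << pca.eigenvectors.size() << ", "
                  << pca.eigenvectors.total() * pca.eigenvectors.elemSize()
                  << " bytes" << std::endl;

        // Each projected sample shrinks to 1 x 500 ...
        cv::Mat projected = pca.project(features.row(0));
        // ... but pca.eigenvectors (and pca.mean) must ship with the app.
        return 0;
    }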

Is there any other way to solve my problem?


Comments

  • you can save it to bla.xml.gz (FileStorage gzip-compresses it to roughly half the size); see the sketch after this comment
  • why do you have *3 separate* svm models ?
  • yea, true, if you want to use a PCA to get from 6000x6400 down to 500 dimensions, your features get smaller, but the needed projection matrix is huuuuge!
  • there are alternatives to pca compression: dct (transform to frequency space, throw away half of it, transform back), random projection, Walsh-Hadamard. but i'd think that 6400 features are not that much, and that compressing them further will degrade classification results.
berak ( 2017-07-09 21:54:26 -0500 )
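Not from the original thread, but a minimal sketch of the two suggestions above (the file name, the tiny synthetic training set, and the number of kept DCT coefficients are placeholder choices): FileStorage gzip-compresses the output when the file name ends in .gz, and cv::dct lets you drop high-frequency coefficients without storing any projection matrix.

    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>

    using namespace cv;
    using namespace cv::ml;

    int main()
    {
        // --- Suggestion 1: save the trained model gzip-compressed. ---
        // Tiny synthetic training set just so the sketch runs end to end.
        Mat samples(10, 6400, CV_32F);
        randu(samples, Scalar(0), Scalar(1));
        Mat labels = (Mat_<int>(10, 1) << 0, 0, 0, 0, 0, 1, 1, 1, 1, 1);

        Ptr<SVM> svm = SVM::create();
        svm->train(samples, ROW_SAMPLE, labels);

        // A ".xml.gz" (or ".yml.gz") extension makes FileStorage gzip the output.
        svm->save("eye_left.xml.gz");                      // hypothetical file name

        // Reloading works the same way (SVM::load also exists in newer 3.x).
        Ptr<SVM> loaded = StatModel::load<SVM>("eye_left.xml.gz");

        // --- Suggestion 2: shrink each feature with a DCT instead of PCA. ---
        // Nothing but the cut-off index has to be stored on the device.
        Mat lbp(1, 6400, CV_32F);                          // one LBP feature vector
        randu(lbp, Scalar(0), Scalar(1));                  // placeholder data
        Mat freq;
        dct(lbp, freq);                                    // to frequency space
        Mat reduced = freq.colRange(0, 3200).clone();      // keep the low half only
        return 0;
    }

Whether gzip alone gets the three models under 20 MB in total depends on how repetitive the stored support vectors are; if it does not, combining it with a feature reduction such as the DCT truncation would be the next thing to try.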

Thank you. I have three classification tasks corresponding to the two eyes and the mouth, respectively, and I need to judge the status of each.

allthewaynorth ( 2017-07-10 03:29:02 -0500 )

oh, apologies, i misread it as: 3 one-against-all models, which could have been a "multiclass model", but that's not the case.

berak ( 2017-07-10 03:33:38 -0500 )

It's ok. You are welcome.

allthewaynorth ( 2017-07-10 09:39:10 -0500 )