SVM training for 2 days, not done yet

asked 2015-09-10 02:13:28 -0600 by thdrksdfthmn

Hi, I started training a 2-class SVM classifier (nu_svc, poly kernel) two days ago and it is still training today. I have two classes of 1008 positives and 1012 negatives of 70x16 pixels, and I used a Dense keypoint detector with SIFT descriptors, so I have 4608 variables per image. Is it normal that it has not finished training the classifier yet, or has it crashed? Shall I let it train until it finishes, or is there a problem that is not being reported, like non-convergence? I have used a TermCriteria(1000, 0.01)
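
For reference, the feature extraction looks roughly like this (a minimal sketch in OpenCV 2.4 C++, not my exact code: the helper names are placeholders and the Dense detector settings are left at defaults, while my actual grid gives 36 keypoints per 70x16 patch, i.e. 36 * 128 = 4608 values per image):

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/nonfree.hpp>   // SIFT lives in the nonfree module in 2.4
    #include <vector>

    // One 1 x (numKeypoints * 128) row vector per grayscale patch.
    cv::Mat describeImage(const cv::Mat& gray)
    {
        cv::initModule_nonfree();  // registers SIFT; call once at startup in real code

        cv::Ptr<cv::FeatureDetector> detector =
            cv::FeatureDetector::create("Dense");
        cv::Ptr<cv::DescriptorExtractor> extractor =
            cv::DescriptorExtractor::create("SIFT");

        std::vector<cv::KeyPoint> keypoints;
        detector->detect(gray, keypoints);

        cv::Mat descriptors;                       // numKeypoints x 128, CV_32F
        extractor->compute(gray, keypoints, descriptors);

        return descriptors.reshape(1, 1);          // flatten to a single row
    }

    // Stack one row per image and build the matching +1 / -1 label column.
    cv::Mat buildTrainingSet(const std::vector<cv::Mat>& positives,
                             const std::vector<cv::Mat>& negatives,
                             cv::Mat& labels)
    {
        cv::Mat trainData;
        for (size_t i = 0; i < positives.size(); ++i)
        {
            trainData.push_back(describeImage(positives[i]));
            labels.push_back(1.f);
        }
        for (size_t i = 0; i < negatives.size(); ++i)
        {
            trainData.push_back(describeImage(negatives[i]));
            labels.push_back(-1.f);
        }
        return trainData;                          // CV_32F, rows = samples
    }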


Comments

  • no, that does not sound normal at all. i'm using similar data, and it should return within a few seconds / minutes for a single train() run.

  • you also have to set the flags in TermCriteria (which of the numbers should be used); see the sketch after this comment

  • which opencv version is it (ml module differs significantly) ?

  • are you using trainAuto() (well, the gridsearch does many train/test iterations, and yes, this might take much longer) ?

  • last, be extra careful (at least with 3.0) when using nu: train() returns false silently if your nu was too small or too big! (ok, looked up the 2.4 version, it does an EXIT in that case)

berak ( 2015-09-10 02:21:12 -0600 )
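
For comparison, a single train() run with the termination flags set explicitly might look roughly like this (a sketch against the OpenCV 2.4 C++ API; the nu/gamma/coef0/degree values are placeholders, not recommendations):

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    bool trainOnce(const cv::Mat& trainData, const cv::Mat& labels, CvSVM& svm)
    {
        CvSVMParams params;
        params.svm_type    = CvSVM::NU_SVC;
        params.kernel_type = CvSVM::POLY;
        params.nu          = 0.5;   // placeholder, must lie in (0, 1]
        params.degree      = 2;     // placeholder
        params.gamma       = 1.0;   // placeholder
        params.coef0       = 0.0;   // placeholder
        // Say explicitly which criteria apply: stop after 1000 iterations OR at eps 0.01.
        params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS,
                                            1000, 0.01);

        // A single run on ~2000 samples of 4608 features should finish in
        // seconds to minutes; in 2.4, an infeasible nu makes train() error out.
        return svm.train(trainData, labels, cv::Mat(), cv::Mat(), params);
    }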

I just realized that I have not mentioned the version. I am using 2.4.11 (built on Mars this year, if that matters). And yes, I use train_auto

thdrksdfthmn ( 2015-09-10 02:33:16 -0600 )

Thanks for the remarks, but the thing is that it just keeps using almost 100% of (one) CPU all the time and it has not stopped... Ok, I will do a simple train with some fixed values, to see what's happening and how long the training takes...

thdrksdfthmn ( 2015-09-10 03:00:12 -0600 )

built on Mars.. grin :p

boaz001 ( 2015-09-10 03:16:04 -0600 )

Hey, I have done the training with fixed params (without train_auto) and it was very fast, less than one minute... Is something broken in train_auto, or are there too many parameter values, so it keeps training for a long, long time to find the best params?

From Mars ;p

thdrksdfthmn ( 2015-09-11 05:04:41 -0600 )

oh, good to hear. there were some issues, but i guess you're just trying too many params on a too fine grid

berak ( 2015-09-11 05:22:39 -0600 )

Could the implicit grids be too fine?

thdrksdfthmn ( 2015-09-11 06:11:42 -0600 )

http://docs.opencv.org/modules/ml/doc...

again, it's one complete train/test pass per iteration, so, if you enabled all of the grids, there will be a * b * c * d * e * f of them ...

(tbh, i mostly prefer clever guessing .. e.g. checking 1,10,100,1000,10000 for C (manually) is usually enough)

berak ( 2015-09-11 07:15:23 -0600 )
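
To make the cost concrete: every enabled grid multiplies the number of full train/test passes, so all six defaults together mean hundreds of runs. The manual "clever guessing" suggested above could look like this for this setup (a sketch only; nu stands in for C because the classifier here is nu_svc, and the candidate values and hold-out split are illustrative):

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>
    #include <cstdio>

    // Fraction of hold-out samples classified correctly.
    double holdOutAccuracy(const CvSVM& svm, const cv::Mat& samples, const cv::Mat& labels)
    {
        int correct = 0;
        for (int i = 0; i < samples.rows; ++i)
            if (svm.predict(samples.row(i)) == labels.at<float>(i))
                ++correct;
        return double(correct) / samples.rows;
    }

    void coarseNuSearch(const cv::Mat& trainData, const cv::Mat& trainLabels,
                        const cv::Mat& testData,  const cv::Mat& testLabels)
    {
        const double candidates[] = { 0.1, 0.2, 0.3, 0.4, 0.5 };   // illustrative
        for (int k = 0; k < 5; ++k)
        {
            CvSVMParams params;
            params.svm_type    = CvSVM::NU_SVC;
            params.kernel_type = CvSVM::POLY;
            params.degree      = 2;      // kept fixed for this sweep
            params.gamma       = 1.0;
            params.nu          = candidates[k];
            params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS,
                                                1000, 0.01);
            CvSVM svm;
            bool ok = false;
            try { ok = svm.train(trainData, trainLabels, cv::Mat(), cv::Mat(), params); }
            catch (const cv::Exception&) { }   // infeasible nu; try the next value
            if (!ok) continue;
            std::printf("nu=%.2f -> hold-out accuracy %.3f\n",
                        candidates[k], holdOutAccuracy(svm, testData, testLabels));
        }
    }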

i hope you did not try 1 for step ;)

berak ( 2015-09-11 07:33:29 -0600 )

Actually I did not create any grid. I thought that train_auto knows that for a poly kernel it should not use the C grid... Lol, if the init grids are using 1 for step, then it will go on forever.... Any ideas of what kind of grids I should use for nu_svc and a poly kernel?

thdrksdfthmn ( 2015-09-11 09:17:38 -0600 )
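
One way to keep train_auto tractable here could be to pass explicit, coarse grids for the parameters that matter for nu_svc with a poly kernel (nu, gamma, coef0, degree) and to disable the rest by giving them a step <= 1, which according to the 2.4 docs means that parameter is simply taken from params instead of being searched. A sketch, with all ranges being illustrative guesses rather than recommended values:

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    bool autoTrainNuPoly(const cv::Mat& trainData, const cv::Mat& labels, CvSVM& svm)
    {
        CvSVMParams params;
        params.svm_type    = CvSVM::NU_SVC;
        params.kernel_type = CvSVM::POLY;
        params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS,
                                            1000, 0.01);

        // A grid is (min_val, max_val, log_step): values go min, min*step, ... while < max.
        CvParamGrid nuGrid    (0.05, 0.9, 2.0);    // 0.05, 0.1, 0.2, 0.4, 0.8
        CvParamGrid gammaGrid (1e-3, 10,  10.0);   // 0.001, 0.01, 0.1, 1
        CvParamGrid coeffGrid (0.1,  10,  10.0);   // 0.1, 1
        CvParamGrid degreeGrid(2,    5,   2.0);    // 2, 4
        CvParamGrid fixedC    (1, 1, 0);           // step <= 1: C is not searched
        CvParamGrid fixedP    (1, 1, 0);           // step <= 1: p is not searched

        // 5*4*2*2 = 80 parameter combinations, each cross-validated 10-fold,
        // instead of the much larger default grids.
        return svm.train_auto(trainData, labels, cv::Mat(), cv::Mat(), params, 10,
                              fixedC, gammaGrid, fixedP, nuGrid,
                              coeffGrid, degreeGrid);
    }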