# SVM Parameters? [closed]

Hi, I am trying to classify four different human actions based on their seven Hu moments using an SVM. So far detection isn't working well: everything is being detected as one particular action.

I'm wondering if anyone has any suggestions on how to improve this, possibly in terms of the SVM parameters. I'm currently using the following:

    // Set up the training parameters (OpenCV 2.x C API)
    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;   // C-support vector classification
    params.kernel_type = CvSVM::RBF;     // radial basis function kernel
    params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
    // Note: C and gamma are left at their default values here
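As an aside on the parameter question: with an RBF kernel the results depend heavily on C and gamma, and OpenCV 2.x can pick both by cross-validation with `CvSVM::train_auto`. A minimal sketch, assuming `trainData` (one 7-element Hu-moment row per frame, type `CV_32FC1`) and a `labels` column vector are already populated; the function name is a hypothetical wrapper:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

// Sketch: let OpenCV choose C and gamma by k-fold cross-validation
// instead of using the defaults.
void trainWithAutoParams(const cv::Mat& trainData, const cv::Mat& labels,
                         CvSVM& svm)
{
    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;
    params.kernel_type = CvSVM::RBF;
    params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS,
                                        1000, 1e-6);

    // 10-fold cross-validation over the default C and gamma grids.
    svm.train_auto(trainData, labels, cv::Mat(), cv::Mat(), params, 10);

    CvSVMParams chosen = svm.get_params();   // inspect the selected C / gamma
    (void)chosen;
}
```

This is a configuration fragment, not a complete program; it still needs your populated training matrices.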


Thanks


### Closed for the following reason the question is answered, right answer was accepted by berak close date 2015-10-25 04:27:23.458363

Could you provide more details about the descriptors used? Also, what do you mean by "actions"? I think the problem lies there and not in the SVM params.

( 2015-10-21 16:57:29 -0500 )edit

Hi, thanks for the response.

By actions I mean: waving hands above the head, fighting (boxing), crouching and standing.

If by descriptors you mean the "features" of the training matrix, then I'm using Hu's seven invariant moments to classify a blob within the frame. Hu's moments describe the shape and other features of an object and are commonly used to identify human actions/behaviour.

By the way, when testing I'm using the same data (frames) that I used to train with.

( 2015-10-21 17:45:04 -0500 )edit

How many frames per action are you using? Are you sure they are different enough? I'm still not sure that it will work using just Hu moments...

( 2015-10-22 04:15:29 -0500 )edit

How will a blob's dimensions give you a good classification between actions o_O I would think you would track the kinematic positions of body parts, like knees, shoulders and elbows, and feed those to an SVM to do action recognition. Using Hu moments is never going to work properly in my opinion... especially if you go outside the lab environment...

( 2015-10-22 04:15:45 -0500 )edit

@StevenPuttemans exactly what I was thinking, but I thought I was missing something... quite difficult, yep

( 2015-10-22 04:23:44 -0500 )edit

Thanks for the responses. I am using 2500 frames per action for training.

@StevenPuttemans don't Hu moments describe the shape of an object? To me the shape of a human in action is a much better descriptor than the locations of points on a body, which can vary greatly depending on the position in the frame and which can be the same for multiple actions.

This only has to work in a controlled (lab) environment, which is fine. The actions I have are very different; for example, a human with his hands above his head is a very different shape from a human who is crouched.

I read online somewhere that using the "signum'ed log of the absolute values of the Hu moments" greatly improves the results. Does anyone have any info on this? I don't know what they mean by "signum'ed".

Thanks

( 2015-10-22 06:49:28 -0500 )edit

You might need to look up some scientific papers, but I am still not convinced. Shapes are affected by the segmentation process needed to extract them, and since plenty of research has shown that segmentation is never failsafe, I am not sure this will work well. Another downside of Hu moments is that different shapes can have similar Hu moments, just as different shapes can share the same centre of gravity.

( 2015-10-22 07:33:38 -0500 )edit

@StevenPuttemans Hi, I took a look at the Hu moments of the different actions. They are significantly different. The problem arises when trying to predict the action.

When using svm.predict() and manually supplying the Hu moments for the crouching position, it returns the label of the hand-waving action (whose moments are not similar at all).

This makes me think the training is not working properly. I save the trained SVM as an .xml file and then load it when running the prediction test in another source file.

( 2015-10-22 16:02:24 -0500 )edit

That's possible, but to check that we need all the info: training data, parameters, model, ...

( 2015-10-23 03:29:55 -0500 )edit

@StevenPuttemans I tested without saving and loading the trained SVM; same problem. Note the Hu moments for the different actions below. Obviously they differ slightly for each sample (for example, depending on where your arms are), but 5 of the 7 descriptors stay roughly the same, while raising the arms may increase 2 of them by 1 or 2, etc. When predicting by manually supplying the Hu moments from the crouching action, it returns label 1 for hand-waving even though clearly the....

Hand-Wave -0.17889, -0.031579, 0.026785, -0.831988, 1.246625, -1.88745,-1.868726

Fight -0.313689, 0.99165, 1.825292, 1.160456, 2.184154, -1.608319, -2.6267

Crouching -3.22782, 6.448374, 10.27783, 10.2856255, 20.56735, -13.5098, -17.65686

Standing -0.20444, -0.55100, -0.75639, -1.21419, -2.21461, -1.4969, -2.7855

( 2015-10-23 10:07:37 -0500 )edit


Hi, to finally answer my own question: the problem was indeed with the code and not the features.

What I did originally was populate the training data and labels as plain ARRAYS and then convert them to MATRICES. That doesn't seem to work. Instead I created the MATRICES from the start and populated them directly. Prediction then works correctly.



Hmm, weird... though I must say I always use vector&lt;type&gt; for arrays. Anyway, you can mark your answer as correct.

( 2015-10-24 12:54:49 -0500 )edit
