
overfitting when training SVM for gender classification

asked 2015-11-24 08:03:41 -0600 by CurtisFu

Hi,

I'm using block-based uniform LBP as the feature and training an SVM for gender classification on face images.

My 1st trained SVM model is computed from 1200 male face images and 500 female face images. (My CvSVMParams settings are exactly the same as in the OpenCV SVM tutorial: http://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html) The result is not good: the hit rate is only about 6x%.
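For readers unfamiliar with the feature, the 59-bin uniform-LBP histogram for one image block can be sketched in pure Python. This is only an illustration of the encoding (not the asker's actual code); a real block-based pipeline would compute one such histogram per block and concatenate them into the SVM feature vector.

```python
def transitions(code):
    """Count 0/1 transitions in the circular 8-bit LBP pattern."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

# Map each of the 256 codes to a bin: the 58 uniform codes
# (<= 2 transitions) get their own bin, all others share one bin.
uniform_codes = [c for c in range(256) if transitions(c) <= 2]
bin_of = {c: i for i, c in enumerate(uniform_codes)}
NON_UNIFORM_BIN = len(uniform_codes)  # 58

def lbp_histogram(img):
    """img: 2-D list of grey values; returns a 59-bin histogram."""
    h = [0] * (NON_UNIFORM_BIN + 1)
    # 8 neighbours, clockwise from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            h[bin_of.get(code, NON_UNIFORM_BIN)] += 1
    return h
```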

Then I try to improve the hit rate by adding more face images for training: I use the 1st trained SVM model to predict on more face images and take the misclassified ones as additional training images. So my 2nd trained SVM model is computed from 1200+200 male face images and 500+100 female face images. I expected the second SVM model to work better than the first one. However, the second one is always overfitted...
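The harvesting step described above (sometimes called hard-example mining or bootstrapping) can be sketched with a toy 1-D threshold classifier standing in for the SVM; `train`, `predict` and the sample values are made up for illustration. Note that appending only the errors skews the training distribution away from the true class distributions, which is one plausible source of the overfitting observed.

```python
def train(samples):
    """Toy 1-D 'classifier': threshold midway between the class means."""
    xs0 = [x for x, y in samples if y == 0]
    xs1 = [x for x, y in samples if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def predict(thr, x):
    return 1 if x >= thr else 0

def mine_hard_examples(thr, pool):
    """Return the pool samples the current model gets wrong."""
    return [(x, y) for x, y in pool if predict(thr, x) != y]

base = [(0.0, 0), (0.2, 0), (0.8, 1), (1.0, 1)]   # initial training set
pool = [(0.6, 0), (0.4, 1), (0.1, 0), (0.9, 1)]   # extra unlabeled-at-train-time images

thr = train(base)                       # 1st model
hard = mine_hard_examples(thr, pool)    # its mistakes on the pool
thr2 = train(base + hard)               # 2nd model, retrained from scratch
```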

I'm wondering whether there is any other way to improve the hit rate, and why my approach gives a less accurate classifier. I hope someone could kindly give me some hints. Thanks.


Comments

You need to increase the dimensionality of your input data or try non-linear kernels!

StevenPuttemans ( 2015-11-24 14:34:02 -0600 )

AFAIK, when there are more images for training, the accuracy should be better. To improve the recognition rate, you can try other advanced feature extraction methods, for example Local Phase Quantization (LPQ), or combine several methods. To pinpoint why you get a lower classification rate with more training images, you should provide more information: your code, your parameter settings, and your experimental images.

tuannhtn ( 2015-11-25 00:34:38 -0600 )

@tuannhtn that is not true; using a larger dataset can lead to larger variance in object appearance, and such appearance might not be as easily separable as in the smaller dataset. Take for example splitting dark and light colors. Adding just black and white samples to a binary class is fairly easy, and high accuracy can be met with a small dataset. But if you want to separate light gray from dark gray under different illumination conditions, then way more data is needed to even achieve the same accuracy.

StevenPuttemans ( 2015-11-25 02:21:49 -0600 )

Yes, @StevenPuttemans, increasing the training set's size does not always lead to improved accuracy: there is a threshold beyond which adding more images to the training set no longer helps. But it usually does help before you reach that threshold.

tuannhtn ( 2015-11-25 03:54:08 -0600 )

I still do not agree; you are talking about oversampling and overfitting beyond a certain threshold. It all depends on the distribution of your data inside the feature space. I can easily add 1000 samples to a very simple weak classifier without increasing its accuracy or generalisation power.

StevenPuttemans ( 2015-11-25 04:11:23 -0600 )

Thanks to both of you for the discussion. Today I tried several things, but none of them gave a better result: 1) using an RBF kernel instead of a linear one; 2) changing block-based LBP (8x8 blocks on a 32x32-pixel image) to image-based LBP (simply computing the LBP code for each pixel and one histogram over the whole image); 3) normalizing the data to [-1, 1] (I'm not sure this step is necessary, but LibSVM requires it and many papers mention the importance of normalization).
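The [-1, 1] normalization from step 3 can be sketched as per-dimension min-max scaling (pure Python illustration; the helper names are made up). The key point, which the LibSVM practical guide also stresses, is that the min/max learned on the training set must be reused at prediction time.

```python
def fit_scaler(rows):
    """rows: list of equal-length feature vectors; learn per-dimension min/max."""
    lo = [min(col) for col in zip(*rows)]
    hi = [max(col) for col in zip(*rows)]
    return lo, hi

def scale(vec, lo, hi):
    """Map each dimension linearly into [-1, 1] using the training-set range."""
    out = []
    for v, a, b in zip(vec, lo, hi):
        out.append(0.0 if b == a else -1.0 + 2.0 * (v - a) / (b - a))
    return out
```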

Following your discussion above, I may do another trial training my SVM on simpler face images (frontal faces). I'm currently using the LFW face database, which may be too difficult.

CurtisFu ( 2015-11-25 05:50:32 -0600 )

1 answer


answered 2015-11-25 07:20:43 -0600

@StevenPuttemans, I still keep my opinion; it comes from my experience with the gender classification problem. Of course, your example is right, but in general, below the threshold, more data should give better accuracy. @CurtisFu, your images are fairly small; you should try with at least 64x64-resolution images. I have tested a polynomial kernel with LBP features on the LFW database (over 12000 images), and the average recognition rate (cross-validation with 1/5 of the database for testing and the remaining 4/5 for training) was above 90%.
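The 1/5-test, 4/5-train protocol mentioned above can be sketched as a plain 5-fold split (pure Python; `train_fn` and `predict_fn` are hypothetical stand-ins for the actual SVM train/predict calls):

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and deal them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, train_fn, predict_fn, k=5):
    """samples: list of (features, label); returns mean accuracy over k folds."""
    folds = kfold_indices(len(samples), k)
    accs = []
    for i in range(k):
        test = [samples[j] for j in folds[i]]
        train = [samples[j] for f in folds[:i] + folds[i + 1:] for j in f]
        model = train_fn(train)
        hits = sum(predict_fn(model, x) == y for x, y in test)
        accs.append(hits / len(test))
    return sum(accs) / k
```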


Comments

Hehe no problem :) I just wanted to start out the discussion :)

StevenPuttemans ( 2015-11-25 07:27:32 -0600 )

Hi, @tuannhtn. I think you are right about increasing the dimensionality of the input data. After trying 128x128 resolution, the hit rate increased to 8x%. Now I'm adding more training data and trying to improve the robustness of the gender classification, especially when the input image comes from a camera. Thanks.

CurtisFu ( 2015-11-25 23:35:36 -0600 )

Stats

Asked: 2015-11-24 08:03:41 -0600

Seen: 877 times

Last updated: Nov 25 '15