# Reduce Size of LBPH trained model [closed]

Hi, I have only 6 classes, with 10 pictures of 100×100 in each class (total size of all classes together ≈ 300 KB), but the resulting YML file is bigger than all the pictures combined (6 MB).

How can I drop the unnecessary features that LBP extracts during the training process, without reducing accuracy?

NOTE, what I have tried so far:

```cpp
Ptr<FaceRecognizer> model = createLBPHFaceRecognizer(1, 8, 4, 4);
```

lbph.yml size = 2 MB (down from 6 MB)

Reducing the images from 100×100 to 50×50:

lbph.yml size = 5.5 MB (down from 6 MB)


### Closed for the following reason: question is not relevant or outdated. Closed by sturkmen, 2020-10-15 14:18:09.

The size of the trained data (lbph.yml) is not related to your input image size! You can try to compress the resultant file.

( 2017-03-08 02:56:14 -0500 )

Note that I want to keep fewer features without losing accuracy, so that lbph.yml itself contains less data; I do not want to just compress the file! @Balaji R, I already mentioned the input image size precisely so that no one would suggest it again.

( 2017-03-08 03:03:09 -0500 )

Again, for the most part it is the FileStorage's text representation that blows the file up this way. If you wrote the Mats straight to disk in binary form, you would end up with only a few KB.

The number of features per image is grid × grid × 256 bins (this does NOT depend on image size!).

If you're willing to hack the implementation, you could reduce it significantly, e.g. by trying "uniform" histograms (only 59 bins instead of 256) or, for small grids, even storing the histograms as uchar instead of float; then you could simply save them as a PNG.

( 2017-03-08 03:25:08 -0500 )