
LBPHFaceRecognizer model

asked 2020-04-22 10:52:41 -0600

updated 2020-04-23 02:32:39 -0600 by berak

Hello everyone,

I am trying to train an LBPHFaceRecognizer model with photos of a lot of people. The problem is that at some point I get an OutOfMemory error. I looked at the model after saving it to a file, and it seems to store the histogram of every single photo; I think this is the problem. From my point of view it should store a representative set of features per person: if I have 50 photos with the same label, it should save one representative histogram (maybe the centroid of the histogram cluster), not 50 (a rough sketch of what I mean is below). Maybe I am wrong, but if someone knows better what happens in the training and prediction phases of this algorithm, or has used it on a big dataset, please reply.
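To illustrate what I mean, here is a rough sketch only (this is not something LBPHFaceRecognizer exposes; computeLbpHistogram below is just a hypothetical helper that would return one histogram Mat per image):

#include <opencv2/core.hpp>
#include <cfloat>
#include <map>
#include <vector>
using namespace cv;

// hypothetical helper: one row histogram (CV_32F) per image
Mat computeLbpHistogram(const Mat& image);

// average the per-image histograms of each label into one "centroid"
std::map<int, Mat> buildCentroids(const std::vector<Mat>& images,
                                  const std::vector<int>& labels)
{
    std::map<int, Mat> sums;
    std::map<int, int> counts;
    for (size_t i = 0; i < images.size(); i++) {
        Mat h = computeLbpHistogram(images[i]);
        if (!sums.count(labels[i]))
            sums[labels[i]] = Mat::zeros(h.size(), h.type());
        sums[labels[i]] += h;
        counts[labels[i]]++;
    }
    for (auto& kv : sums)
        kv.second = kv.second / (double)counts[kv.first];  // centroid = mean histogram
    return sums;
}

// predict by nearest centroid instead of nearest single histogram
int predictByCentroid(const Mat& query, const std::map<int, Mat>& centroids)
{
    Mat qh = computeLbpHistogram(query);
    int best = -1; double bestDist = DBL_MAX;
    for (const auto& kv : centroids) {
        double d = norm(qh, kv.second, NORM_L2);
        if (d < bestDist) { bestDist = d; best = kv.first; }
    }
    return best;
}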

Thank you, Bogdan


Comments


it seems to be saving the histograms of every photo

this is correct (and this is all that happens in the training phase; prediction is just a 1-nearest-neighbour search over those histograms)

maybe the centroid of the histograms cluster

i don't think this is feasible (but please try and report back!)

from a lot of persons.

how many? again, since prediction is a linear search (it does not build a "global model"), you could split it up into several instances (see the sketch below this comment)

berak ( 2020-04-23 02:42:07 -0600 )
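a rough sketch of what i mean by splitting it up (my own sketch, not an existing api; the chunking and file names are made up):

#include <opencv2/core.hpp>
#include <opencv2/face.hpp>
#include <cfloat>
#include <vector>
using namespace cv;

// train one recognizer per chunk of persons, so no single model
// has to hold all histograms in memory at once
void trainInChunks(const std::vector<std::vector<Mat>>& chunkImages,
                   const std::vector<std::vector<int>>& chunkLabels)
{
    for (size_t c = 0; c < chunkImages.size(); c++) {
        Ptr<face::LBPHFaceRecognizer> model = face::LBPHFaceRecognizer::create();
        model->train(chunkImages[c], chunkLabels[c]);
        model->write(format("lbph_chunk_%d.yml", (int)c));  // persist, then free
    }
}

// prediction is a linear search anyway, so searching chunk by chunk
// and keeping the best (lowest) distance gives the same result
int predictAcrossChunks(const Mat& query, int numChunks)
{
    int bestLabel = -1; double bestDist = DBL_MAX;
    for (int c = 0; c < numChunks; c++) {
        Ptr<face::LBPHFaceRecognizer> model = face::LBPHFaceRecognizer::create();
        model->read(format("lbph_chunk_%d.yml", c));
        int label; double dist;
        model->predict(query, label, dist);   // lower distance = better match
        if (dist < bestDist) { bestDist = dist; bestLabel = label; }
    }
    return bestLabel;
}

(reloading every chunk per query is slow, of course. if you can afford a few chunks in ram at a time, keep them loaded; splitting only trades peak memory for i/o.)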

Thank you for the reply. I am trying to train the model on a dataset that contains about 1500 persons, with about 50-100 photos for each person. I guess using so many photos with this LBPHFaceRecognizer is not the best approach.

Bogdan133 ( 2020-04-25 07:31:44 -0600 )

50-100 photos for each person.

well, you certainly need a few. maybe you should do some cross-fold validation to find out the minimum number needed (a rough sketch follows below this comment).

maths time: each histogram is 64 grid cells x 256 bins x 4 bytes = 64 kb per image. for 1500 persons and 50 images each, that's ~75,000 histograms, roughly 4.8 gb of memory, which is quite a lot.

berak ( 2020-04-25 07:43:37 -0600 )
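something like this maybe (a simplified hold-out sweep rather than proper k-fold cross-validation, and you'd probably run it on a subset of persons so it fits in memory; photosByPerson is a made-up container):

#include <opencv2/core.hpp>
#include <opencv2/face.hpp>
#include <cstdio>
#include <map>
#include <vector>
using namespace cv;

// train on the first k photos of each person, test on the rest,
// and print the accuracy for increasing k
void sweepPhotosPerPerson(const std::map<int, std::vector<Mat>>& photosByPerson)
{
    for (int k = 5; k <= 50; k += 5) {
        std::vector<Mat> trainImgs; std::vector<int> trainLabels;
        for (const auto& kv : photosByPerson)
            for (int i = 0; i < k && i < (int)kv.second.size(); i++) {
                trainImgs.push_back(kv.second[i]);
                trainLabels.push_back(kv.first);
            }
        Ptr<face::LBPHFaceRecognizer> model = face::LBPHFaceRecognizer::create();
        model->train(trainImgs, trainLabels);

        int correct = 0, total = 0;
        for (const auto& kv : photosByPerson)
            for (size_t i = (size_t)k; i < kv.second.size(); i++) {
                total++;
                if (model->predict(kv.second[i]) == kv.first) correct++;
            }
        printf("k=%d  accuracy=%.3f\n", k, total ? (double)correct / total : 0.0);
    }
}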

Try training a neural network (CNN) instead.

holger ( 2020-04-26 20:51:24 -0600 )

1 answer


answered 2020-04-25 08:04:01 -0600 by berak

you could try to use different, shorter features instead of lbph.

opencv's dnn module supports the openface model, which produces a 128-float feature from a (bgr) image:

// https://storage.cmusatyalab.org/openface-models/nn4.small2.v1.t7
dnn::Net net = dnn::readNet("nn4.small2.v1.t7");

// for each (bgr) face image:
Mat inputBlob = dnn::blobFromImage(img, 1./255, Size(96,96), Scalar(), true, false);
net.setInput(inputBlob);
Mat res = net.forward().clone();   // 1x128 CV_32F feature

// now cache the result Mat's in some database, and do your own
// nearest neighbour search using e.g. 

// L2 distance (smaller = more similar):
double dist = norm(a, b, NORM_L2);

// or a plain dot product (larger = more similar):
double sim = a.dot(b);

// or cosine similarity, negated so it behaves like a distance
// (smaller = more similar):
double x = a.dot(b);
double y = a.dot(a);
double z = b.dot(b);
double dist = -x / sqrt(y * z);
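
putting it together, a minimal sketch of the caching / nearest-neighbour part (GalleryEntry and describe() are my own names, not part of opencv):

#include <opencv2/dnn.hpp>
#include <cfloat>
#include <vector>
using namespace cv;

struct GalleryEntry { int label; Mat feature; };   // feature: 1x128 CV_32F

// compute the 128-float openface feature for one (bgr) face image
Mat describe(dnn::Net& net, const Mat& img)
{
    Mat blob = dnn::blobFromImage(img, 1./255, Size(96,96), Scalar(), true, false);
    net.setInput(blob);
    return net.forward().clone();
}

// 1-nearest-neighbour search over the cached features (L2 distance)
int predict(dnn::Net& net, const std::vector<GalleryEntry>& gallery, const Mat& img)
{
    Mat q = describe(net, img);
    int best = -1; double bestDist = DBL_MAX;
    for (const auto& e : gallery) {
        double d = norm(q, e.feature, NORM_L2);
        if (d < bestDist) { bestDist = d; best = e.label; }
    }
    return best;   // you'd also want a threshold on bestDist to reject unknowns
}

each feature is only 128 floats (512 bytes), versus 64 kb for an lbph histogram, so even 75,000 cached features stay under 40 mb.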
