
Patricia Star's profile - activity

2016-06-21 11:46:17 -0600 asked a question How to speed up prediction with LBPHFaceRecognizer?

Hi there, first of all: the LBPHFaceRecognizer takes grey images and labels to train, and then grey images to predict the label. When I looked at the source code, it looks like, internally, spatial histograms are compared to find the nearest neighbour. So I pass the train (and later predict) method not a whole face image, but a "concatenated" image of parts cropped out of the face at important points (like eyes, nose, ...). It looks like this:

    std::vector<cv::Point2f> imagePoints;            // 11 landmark points
    std::vector<cv::Mat> crops;
    for (const cv::Point2f& p : imagePoints) {
        cv::Rect rec(cvRound(p.x) - 8, cvRound(p.y) - 8, 16, 16); // 16x16 rectangle around every image point
        crops.push_back(originalImage(rec).clone());
    }
    cv::Mat result;
    cv::vconcat(crops, result);                      // concatenate the cropped parts into one image
    // all training results go into a std::vector<cv::Mat>, then: model->train(trainImages, labels);
    int predictedLabel = model->predict(result);     // model is the trained LBPHFaceRecognizer

So, because it is comparing histograms, there should be no problem using this image instead of a "real" face, am I right?

Anyway, the prediction is quite slow, approx. 300-350 ms (the images/Mat objects have 16 cols and 11*16 rows).

Does anyone have ideas on how to speed this up? Maybe some kind of "normalization" before training and predicting?
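Just to make the "normalization" idea concrete, here is a rough, untested sketch of the kind of thing I mean (assuming OpenCV 3.x with the opencv_contrib face module; the grid_x/grid_y values are only placeholders, and trainImages/trainLabels are placeholder names):

    // sketch: equalize the concatenated image before train/predict and use a coarser LBPH grid
    cv::Mat norm;
    cv::equalizeHist(result, norm);                        // simple contrast normalization (8-bit grey)
    // fewer grid cells -> fewer spatial histograms to compare -> faster predict (accuracy may drop)
    cv::Ptr<cv::face::LBPHFaceRecognizer> model =
        cv::face::LBPHFaceRecognizer::create(1, 8, 4, 4);  // radius, neighbors, grid_x, grid_y
    model->train(trainImages, trainLabels);                // trainImages: equalized concatenated crops
    int label = model->predict(norm);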

Thanks for your help

2016-06-17 05:30:20 -0600 commented answer How to generate key points by myself

Thanks for that idea, I will give it a try. For the identification I use 11 landmarks (image points); I get them using the face landmark detector from dlib.
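For reference, the landmark extraction is roughly along these lines (only a simplified sketch: it assumes dlib's standard 68-point shape predictor, and the 11 indices below are just an example, not the exact set I use):

    // needs <dlib/opencv.h>, <dlib/image_processing.h>, <dlib/image_processing/frontal_face_detector.h>
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::shape_predictor sp;
    dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;
    dlib::cv_image<unsigned char> dimg(greyImage);           // wrap the 8-bit grey cv::Mat for dlib
    std::vector<dlib::rectangle> faces = detector(dimg);     // assumes at least one face is found
    dlib::full_object_detection shape = sp(dimg, faces[0]);
    std::vector<cv::Point2f> imagePoints;
    int idx[11] = {36, 39, 42, 45, 30, 31, 35, 48, 54, 57, 8}; // example landmark indices only
    for (int i : idx)
        imagePoints.push_back(cv::Point2f(shape.part(i).x(), shape.part(i).y()));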

2016-06-17 04:35:08 -0600 commented answer How to generate key points by myself

Thanks for your answer. I tried this before, but the recognition of the identity using the SVM is super bad. A friend said he thinks it is because, if I only use these three values (x, y, size), the response defaults to 0 and the angle to -1, which leads to keypoints that are not meaningful at all. I also tried setting the response to the same value for all keypoints and the angle to 0, but the results were really bad as well :-(

2016-06-17 03:50:28 -0600 commented question How to generate key points by myself

I'm trying to train an SVM on these descriptors. The points are image points on a person's face (e.g. eyes, mouth, ...). To train an SVM on these descriptors and later be able to do person identification, I need exactly the same points on every training image (which I have), but then also the descriptors at these points. Hope this makes it understandable :-)
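To make the setup a bit clearer, this is roughly the training step I'm aiming for (a sketch only, untested; descriptorsPerImage, personIds and querySample are placeholder names, and it assumes the OpenCV 3.x ml module):

    // descriptorsPerImage[i]: the 11 SURF descriptors of training image i, as an 11x64 CV_32F Mat
    cv::Mat samples, labels;
    for (size_t i = 0; i < descriptorsPerImage.size(); ++i) {
        samples.push_back(descriptorsPerImage[i].reshape(1, 1)); // flatten to one row per image
        labels.push_back(personIds[i]);                          // one int person id per image
    }
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::RBF);
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);
    int predictedId = (int)svm->predict(querySample);            // querySample: 1x(11*64) CV_32F row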

2016-06-16 06:36:55 -0600 asked a question How to generate key points by myself

Hi, I want to compute SURF descriptors at my own keypoints. The problem is that currently I only have 2D image points (x,y coordinates), but OpenCV's method for calculating SURF descriptors needs KeyPoint objects, which, besides the coordinates, need additional information like scale, size, orientation, ... (information which I do not have).

Please help: how can I generate correct KeyPoints from my 2D image points? (Correct means I want keypoints at exactly the coordinates of my image points.)
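To make the question concrete, this is the kind of thing I'm hoping to do (only a sketch, not working code; the size value of 16 is just a guess, greyImage is a placeholder, and SURF here is the one from opencv_contrib's xfeatures2d module):

    std::vector<cv::KeyPoint> keypoints;
    for (const cv::Point2f& p : imagePoints)
        // KeyPoint(pt, size, angle = -1, response = 0, octave = 0, class_id = -1)
        keypoints.push_back(cv::KeyPoint(p, 16.f));       // what should size/angle/response be?
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create();
    cv::Mat descriptors;
    surf->compute(greyImage, keypoints, descriptors);     // one 64-float row per keypoint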

Thanks in advance, Patricia