FaceRecognizer always returns the same label

asked 2014-01-13 03:15:30 -0600 by souraklis

updated 2020-11-28 07:55:50 -0600

I am having problems with OpenCV's face recognizer algorithm. I am training my model on images of 5 different persons, but during prediction I always receive the same predicted label, 1. The training images are cropped and aligned grayscale images of the persons.

My training code (the images are the V channel from the HSV colormap):

images = dbreading.trainImages; // vector<Mat> of training images
labels = dbreading.trainLabels; // vector<int> of labels

Ptr<FaceRecognizer> model = createEigenFaceRecognizer();

model->train(images, labels);
model->save("eigenfaces.yml");
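For reference, here is a minimal, self-contained sketch of how the training vectors might be built, assuming OpenCV 2.4's contrib API as used in the question. The file naming scheme (`personN_M.png`) and the image counts are hypothetical; the important points are that every image is loaded as grayscale and that all images share the same size.

```cpp
#include <cstdio>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/contrib/contrib.hpp>
using namespace cv;

int main()
{
    std::vector<Mat> images;
    std::vector<int> labels;

    // Hypothetical file layout: 5 persons, several shots each.
    for (int person = 0; person < 5; person++) {
        for (int shot = 0; shot < 10; shot++) {
            char path[64];
            sprintf(path, "person%d_%d.png", person, shot);
            Mat img = imread(path, CV_LOAD_IMAGE_GRAYSCALE);
            if (img.empty()) continue;   // skip missing files
            images.push_back(img);       // keep the original 2D shape
            labels.push_back(person);    // one integer label per image
        }
    }

    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    model->train(images, labels);
    model->save("eigenfaces.yml");
    return 0;
}
```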

My prediction code:

// 'detections' is the tested image, cropped and aligned (V channel from the HSV colormap)
Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
model->load("eigenfaces.yml");
cout << "The size of the detected image is width: " << detections.cols << " height: " << detections.rows << endl;

// And get a prediction from the cv::FaceRecognizer:
int predicted_label = model->predict(detections);
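As a diagnostic aid, the two-output `predict()` overload (also part of OpenCV 2.4's contrib API) additionally returns the distance to the closest training sample, which can show whether every query is collapsing onto the same neighbour. A sketch, continuing from the snippet above:

```cpp
// Ask for the distance alongside the label:
int predicted_label = -1;
double distance = 0.0;
model->predict(detections, predicted_label, distance);
cout << "label: " << predicted_label << " distance: " << distance << endl;
// A suspiciously constant or near-zero distance for every query suggests
// the model was trained on bad data (e.g. the all-zero .yml noted below).
```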

I am guessing that I am making a classic mistake here. What are the usual reasons a recognizer gets stuck on the same label? I push_back the images into the images vector at their original size. Do I have to reshape them into row vectors before training? Do I have to randomize the order of the training images, and does the order matter for training?

EDIT: I've discovered that the .yml file contains only zeros, so the training process is completely wrong. Is it necessary to normalize the pixel values? Could the problem be due to this lack of normalization?


Comments

As far as I know, the Eigenfaces implementation does not support saving the model, which could be the cause of the problem. To verify, I suggest replacing model->load("eigenfaces.yml"); with a fresh model->train(images, labels); from scratch.

dervish79 ( 2014-01-13 04:16:26 -0600 )

What do you mean by "normalize the values of the pixels"? They must be grayscale.

dervish79 ( 2014-01-13 04:40:35 -0600 )

In my MATLAB implementation I divide the pixels by 255 to normalize them to 0-1. Basically, the problem seems to lie with the HSV value channel: when I use plain RGB it works. My question now: is it necessary here to reshape each image from a matrix into a 1xn vector for training?
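The MATLAB-style divide-by-255 normalization can be sketched in OpenCV as below, assuming an 8-bit single-channel input. Note that reshaping to a 1xn row vector is not needed before `train()`; EigenFaceRecognizer flattens the images internally.

```cpp
// 'gray' is a CV_8UC1 face image, e.g. the V channel of an HSV conversion.
Mat gray;
Mat normalized;
gray.convertTo(normalized, CV_32F, 1.0 / 255.0);  // scale pixels to [0, 1]
```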

souraklis ( 2014-01-13 04:49:55 -0600 )

Saving the model is working fine!!

souraklis ( 2014-01-13 04:50:44 -0600 )
  • "I am training my model using images of 5 different persons" - you'll need around 10-20 images per person.
  • "Do I have to reshape them as vectors before the training process?" - no (that's done internally already).
  • "Do I have to randomize the order of the train images?" - no.
  • "necessary to normalize the values of the pixel?" - you can try that, but usually applying equalizeHist() works better.

Just remember that you'll have to apply exactly the same preprocessing to your training and test images.
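That advice can be sketched as a single preprocessing function applied to every image before both train() and predict(). The grayscale conversion below is an assumption; the asker uses the V channel of HSV instead, and either choice is fine as long as it is the same everywhere.

```cpp
// Identical preprocessing for training and test images (a sketch):
Mat preprocess(const Mat& faceBGR)
{
    Mat gray, equalized;
    cvtColor(faceBGR, gray, CV_BGR2GRAY);  // or extract the HSV V channel
    equalizeHist(gray, equalized);         // spread out the intensity histogram
    return equalized;
}
```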

berak ( 2014-01-13 05:27:06 -0600 )

Did you get a solution to this? I have the same issue.

jastreich ( 2014-09-22 16:44:07 -0600 )