FaceRecognizer always returns the same label
I am having problems with OpenCV's FaceRecognizer. I am training the model with images of 5 different people, but during prediction I always get the same label, 1. The training images are cropped, aligned, grayscale (black and white) pictures of the people.
My code for training (using the V channel of the HSV color space):
images = dbreading.trainImages; // vector<Mat> with the 2D training images
labels = dbreading.trainLabels; // vector<int> with the corresponding labels
Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
model->train(images, labels);
model->save("eigenfaces.yml");
My code for prediction:
// detections: the test image, cropped and aligned (V channel of the HSV color space)
Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
model->load("eigenfaces.yml");
cout << "The size of the detected image is width: " << detections.cols << "height: " << detections.rows << endl;
// And get a Prediction from the cv::FaceRecognizer:
int predicted_label;
predicted_label = model->predict(detections);
I am guessing I am making a classic mistake here. What are the usual reasons for a recognizer getting stuck on the same label? I push_back the images into the images vector at their original size. Do I have to reshape them into vectors before the training process? Do I have to randomize the order of the training images, and does the order matter for training?
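For reference, the filling is roughly like this (simplified; the real file names and the actual reading live inside dbreading):

vector<Mat> images;
vector<int> labels;

// Each image is pushed back at its original size, as a single-channel Mat.
Mat img = imread("person1_01.png", CV_LOAD_IMAGE_GRAYSCALE); // placeholder file name
images.push_back(img);
labels.push_back(1); // label of the person shown in the image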
EDIT: I have discovered that the .yml file contains zeros, so the training process is going completely wrong. Is it necessary to normalize the pixel values? Could the problem be caused by the lack of normalization?
As far as I know, the Eigenfaces implementation does not allow saving the model, which could be the reason for the problem. To verify, I suggest you replace model->load("eigenfaces.yml"); with training from scratch via train(images, labels).
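A minimal sketch of what I mean (assuming images, labels and detections are the same variables as in your code):

// Train and predict in the same run, without saving/loading the model,
// to rule out any problem with the .yml file.
Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
model->train(images, labels);

int predicted_label = -1;
double confidence = 0.0;
model->predict(detections, predicted_label, confidence); // overload that also returns the distance
cout << "Predicted label: " << predicted_label << " distance: " << confidence << endl;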
What do you mean by "normalize the values of the pixels"? They must be in gray levels.
In my MATLAB implementation I divide the pixels by 255 to normalize them to 0-1. It turns out my problem is with the HSV value channel: when I use plain RGB it works. My question now is whether it is necessary here to reshape each image from a matrix into a 1xn vector for the training process.
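In OpenCV that scaling would correspond to something like this (sketch; vChannel is a placeholder for the extracted V-channel Mat):

Mat normalized;
vChannel.convertTo(normalized, CV_32F, 1.0 / 255.0); // scale 8-bit values to the range [0, 1]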
Saving the model is working fine!!
Just remember that you'll have to do exactly the same preprocessing for your train and test images.
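For example, something along these lines (just a sketch; cropAndAlign() stands in for whatever detection/alignment step you already use, and 100x100 is an arbitrary example size):

// One function applied identically to every training and test image.
Mat preprocess(const Mat& input)
{
    Mat face = cropAndAlign(input);                 // your existing crop/align step (placeholder)
    Mat hsv, channels[3];
    cvtColor(face, hsv, CV_BGR2HSV);
    split(hsv, channels);                           // channels[2] is the V channel
    Mat v;
    channels[2].convertTo(v, CV_32F, 1.0 / 255.0);  // same scaling for train and test
    resize(v, v, Size(100, 100));                   // same fixed size for train and test
    return v;
}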
Did you get a solution to this? I have the same issue.