OpenCV FaceRecognizer 3.3.0 with Java: mismatching images.
I have 13 test images in one directory (dir) and 50 training images in another directory (trainingDir).
When I test the test images against the trained image list, a few match (labels are returned correctly), but a few are mismatched even though those faces exist in the training set.
How can I find the mismatched images by passing a threshold to create()?
When I pass a threshold and a number of components (eigenfaces), something goes wrong with the matched labels: images that previously matched are now mismatched, and wrong labels are returned.
How can I avoid these mismatches?
Below is my code:
import java.io.File;
import java.io.FilenameFilter;
import java.util.Arrays;
import java.util.Vector;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.face.EigenFaceRecognizer;
import org.opencv.face.FaceRecognizer;

import static org.opencv.core.CvType.CV_32SC1;
import static org.opencv.imgcodecs.Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE;
import static org.opencv.imgcodecs.Imgcodecs.imread;

public class OpenCVFaceRecognizer {

    static File dir = new File("/home/venkatesh/Pictures/FR-Images/Test1");
    static String trainingDir = "/home/venkatesh/Pictures/FR-Images/trainingFaces";

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        if (dir.isDirectory()) {
            File[] listFiles = dir.listFiles();
            Arrays.sort(listFiles);
            for (final File f : listFiles) {
                predictFace(f);
            }
        } else {
            System.out.println("Not a directory!!");
            System.exit(0);
        }
    }

    public static void predictFace(File f) {
        // Load the test image as grayscale so it matches the training images.
        Mat testImage = imread(f.getAbsolutePath(), CV_LOAD_IMAGE_GRAYSCALE);
        File root = new File(trainingDir);
        FilenameFilter imgFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                name = name.toLowerCase();
                return name.endsWith(".jpg") || name.endsWith(".pgm") || name.endsWith(".png");
            }
        };
        File[] imageFiles = root.listFiles(imgFilter);
        Vector<Mat> images = new Vector<>(imageFiles.length);
        Mat labels = new Mat(imageFiles.length, 1, CV_32SC1);
        int counter = 0;
        for (File image : imageFiles) {
            Mat img = imread(image.getAbsolutePath(), CV_LOAD_IMAGE_GRAYSCALE);
            // Training files are named "<label>_...", so the numeric prefix is the label.
            int label = Integer.parseInt(image.getName().split("_")[0]);
            images.add(counter, img);
            labels.put(counter, 0, label);
            counter++;
        }
        FaceRecognizer faceRecognizer = EigenFaceRecognizer.create();
        faceRecognizer.train(images, labels);
        int[] label = {-1};
        double[] confidence = {0.0};
        faceRecognizer.predict(testImage, label, confidence);
        int predictedLabel = label[0];
        System.out.println("Predicted label: " + predictedLabel + " Distance :" + confidence[0] / 1000);
    }
}
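Aside: the label in predictFace comes purely from the file name. The code assumes every training image is named `<label>_<rest>.<ext>` (e.g. a hypothetical `76_1.jpg`), and any file that breaks that convention throws a NumberFormatException. A minimal, OpenCV-free sketch of that convention with a defensive fallback:

```java
import java.util.regex.Pattern;

public class LabelParser {
    // Assumption: training files follow "<numericLabel>_<rest>.<ext>", e.g. "76_1.jpg".
    private static final Pattern LABELED = Pattern.compile("\\d+_.*");

    /** Returns the numeric label prefix, or -1 if the name does not match the convention. */
    static int labelOf(String fileName) {
        if (!LABELED.matcher(fileName).matches()) {
            return -1; // fallback instead of a NumberFormatException at training time
        }
        return Integer.parseInt(fileName.split("_")[0]);
    }

    public static void main(String[] args) {
        System.out.println(labelOf("76_1.jpg"));  // 76
        System.out.println(labelOf("face.jpg"));  // -1 (no numeric prefix)
    }
}
```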
Output:
- Predicted label: 76 Distance :0.6879670545920532
- Predicted label: 76 Distance :1.2013331537399639
- Predicted label: 76 Distance :1.863555635130535
- Predicted label: 88 Distance :2.3640474213981806
- Predicted label: 66 Distance :2.9285842098873553
- Predicted label: 66 Distance :2.3156894397998764
- Predicted label: 66 Distance :2.525213592806841
- Predicted label: 79 Distance :2.3647210914286783
- Predicted label: 92 Distance :3.8993551613685513
- Predicted label: 79 Distance :3.8066827136184176
- Predicted label: 92 Distance :4.443677492587241
- Predicted label: 88 Distance :2.949079331858225
- Predicted label: 88 Distance :3.1199972445479807
In the output, the 10th prediction is mismatched (79 instead of 92).
To avoid this mismatch, I passed parameters to the create() factory method.
The changed code is below:
FaceRecognizer faceRecognizer = EigenFaceRecognizer.create(1, 110);
faceRecognizer.train(images, labels);
int[] label = {-1};
double[] confidence = {0.0};
faceRecognizer.predict(testImage, label, confidence);
int predictedLabel = label[0];
System.out.println("Predicted label: " + predictedLabel + " Distance :"+ confidence[0]/1000);
In create(1, 110), 1 is the number of components and 110 is the threshold.
Output:
- Predicted label: 76 Distance :0.010607112241999062
- Predicted label: 76 Distance :0.00935673483193318
- Predicted label: 76 Distance :0.0245086870763173
- Predicted label: 79 Distance :0.026988126077856122
- Predicted label: 88 Distance :0.008182496766205987
- Predicted label: 88 Distance :0.012984324699383251
- Predicted label: 88 Distance :0.0011827737487510603
- Predicted label: 79 Distance :0.009255683204992692
- Predicted label: 88 Distance :0.041966132865250985
- Predicted label: 66 Distance :0.10178422613652219
- Predicted label: 88 Distance :0 ...
1: num_components=1 is definitely wrong. The default (0) will retain as many eigenvectors as you have images.
Then, you need something like a dozen images per person for the training.
2: that's unrealistic. The distance is large for bad predictions and small for good ones, but you probably should not rely on it at all: it only tells you how good the prediction was, not whether it was correct. (If you're unlucky, the distance between two images of the same person can still be larger than the distance between two images of different persons; that's why you need many images per person, again.)
Here, if num_components is anything other than 1, the predicted label is -1 for every test image. I have 13 test images and 91 (13*7) training images, so what values should I use for num_components and threshold?
Again, better to leave both values alone.
Are your images cropped properly? Did you apply face detection beforehand? Have you tried aligning the images so the eyes are on a horizontal row?
To all your questions, my answer is yes. The images are cropped and aligned properly, and they are equal in size too. Okay, I will leave the values alone, but that does not resolve the issue. Please post if you have anything else to suggest.
In general the prediction works fine, but it fails for unknown images. If there is a tutorial that can help me, please post it. Thank you.
OpenCV has always had a margin of error in facial recognition. It is not effective.
It is not meant to work with "unknown" persons.
Those classes do: nearest 1 out of x from a db. That's it.
If your use case requires something different, you have to use a different technique.
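To make the "nearest 1 out of x" point concrete: as I understand the Eigenfaces predictor, it projects the query image and returns the label of the closest stored sample, and the threshold only turns "even the closest sample is too far away" into -1; it never abstains otherwise. A minimal, OpenCV-free sketch of that decision rule (the labels and distances below are made up for illustration):

```java
public class NearestWithThreshold {
    /**
     * Returns the label of the training sample nearest to the query,
     * or -1 if even the nearest one is farther than the threshold.
     * Mirrors the closed-set behavior described above: the recognizer
     * always picks *someone* from the db unless the threshold cuts it off.
     */
    static int predict(double[] distances, int[] labels, double threshold) {
        int best = 0;
        for (int i = 1; i < distances.length; i++) {
            if (distances[i] < distances[best]) best = i;
        }
        return distances[best] <= threshold ? labels[best] : -1;
    }

    public static void main(String[] args) {
        int[] labels = {76, 88, 92};
        double[] distances = {2100.0, 850.0, 1400.0};
        System.out.println(predict(distances, labels, 1000.0)); // 88: nearest and within threshold
        System.out.println(predict(distances, labels, 500.0));  // -1: nearest sample still too far
    }
}
```

This is why a distance threshold alone cannot reliably flag wrong matches: a too-tight threshold rejects genuine faces, while a loose one lets the nearest wrong person through.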