
Please decide to either file a bug (if you think there is one), post on the OpenCV QA page, or ask on stackoverflow.com. I found at least these three versions of this question:

- http://answers.opencv.org/question/2141/opencv-242-facerec_democpp-interpreting-output-of/
- http://stackoverflow.com/questions/12311923/opencv-2-4-2-facerecognizer-class-predict-function-interpretation
- http://code.opencv.org/issues/2340

So which one should I answer?

First of all, how to interpret the labels. As outlined in the tutorials, each label is assigned to one person; just take a look at the CSV file you feed into the demo. For example, the images for `s1` (subject 1) in the AT&T Database are assigned to `0`:

```
/path/to/at/s1/2.pgm;0
/path/to/at/s1/7.pgm;0
/path/to/at/s1/6.pgm;0
...
```

While the images for `s2` (subject 2) are assigned to `1`:

```
/path/to/at/s2/2.pgm;1
/path/to/at/s2/7.pgm;1
/path/to/at/s2/6.pgm;1
...
```

So each label corresponds to the images of a person. I hope that makes sense.
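The CSV format is simple enough to parse by hand. Here is a minimal sketch in plain C++ (my own code, not the tutorial's `read_csv` function) that turns such lines into path/label pairs:

```cpp
#include <cassert>
#include <string>
#include <vector>

// One CSV record: an image path and the integer label of the subject.
struct Sample {
    std::string path;
    int label;
};

// Parse lines of the form "/path/to/image.pgm;0" (the separator used in
// the tutorial's CSV) into path/label pairs.
std::vector<Sample> parseCsv(const std::vector<std::string>& lines,
                             char separator = ';') {
    std::vector<Sample> samples;
    for (const std::string& line : lines) {
        std::size_t pos = line.rfind(separator);
        if (pos == std::string::npos)
            continue; // skip malformed lines
        Sample s;
        s.path = line.substr(0, pos);
        s.label = std::stoi(line.substr(pos + 1));
        samples.push_back(s);
    }
    return samples;
}
```

So all images of `s1` end up with label `0`, all images of `s2` with label `1`, and so on.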

Now let's get to your predicted confidence values. Actually, the values you report in the Stack Overflow post make no sense to me. I have no idea how you can get negative values for the best matching face, because in the code I compute the Euclidean distance, which is always non-negative:

```
double dist = norm(_projections[sampleIdx], q, NORM_L2);
```
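For illustration, here is a simplified, OpenCV-free sketch of what the prediction does: it computes the L2 distance between the query projection and every stored training projection and keeps the nearest one. The names are my own; the real implementation works on `cv::Mat` rows:

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// L2 (Euclidean) distance between two projection vectors; the square root
// of a sum of squares, so by construction it cannot be negative.
double l2Distance(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

// Nearest-neighbor search over the stored projections: returns the label of
// the closest training sample and writes its distance into `confidence`.
int predictNearest(const std::vector<std::vector<double>>& projections,
                   const std::vector<int>& labels,
                   const std::vector<double>& query,
                   double& confidence) {
    int bestLabel = -1;
    confidence = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < projections.size(); ++i) {
        double dist = l2Distance(projections[i], query);
        if (dist < confidence) {
            confidence = dist;
            bestLabel = labels[i];
        }
    }
    return bestLabel;
}
```

The "confidence" reported by the model is therefore just this smallest distance, which is why a negative value should be impossible.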

I just updated to the latest OpenCV revision in git and ran the Eigenfaces sample provided in the tutorial. To get the prediction and associated confidence, I commented out the first prediction and wrote the following code. I am on 32-bit Ubuntu 10.04:

```
// To get the confidence of a prediction call the model with:
//
int predictedLabel = -1;
double confidence = 0.0;
model->predict(testSample, predictedLabel, confidence);
//
// Output the prediction:
string result_message = format("Predicted class = %d / Actual class = %d / Confidence = %f", predictedLabel, testLabel, confidence);
cout << result_message << endl;
```

Running the demo on the CSV file given in the tutorial, I get the following output:

```
philipp@mango:~/git/facerecsamples_build$ ./facerec_eigenfaces /home/philipp/facerec/data/at.txt
Predicted class = 37 / Actual class = 37 / Confidence = 1806.542475
Eigenvalue #0 = 2817234.89109
Eigenvalue #1 = 2065223.71308
Eigenvalue #2 = 1096613.63515
Eigenvalue #3 = 888103.94982
Eigenvalue #4 = 818941.86977
Eigenvalue #5 = 538914.47401
Eigenvalue #6 = 392433.54243
Eigenvalue #7 = 373805.54654
Eigenvalue #8 = 313921.17233
Eigenvalue #9 = 288902.01563
```

So the distance is 1806.542475 for me, a value which I would expect.

Determining the optimal threshold value is not a trivial task, so I can't give a trivial answer here. The threshold depends on your input data and, as far as I know, there's no rule to calculate it. I would find the threshold by simply cross-validating it on my input data.
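As a rough sketch of what I mean by cross-validating the threshold: collect the distances of genuine matches and of impostor (unknown) faces on held-out data, then scan candidate thresholds and pick the one that best separates the two sets. This is plain C++ with my own naming, not OpenCV API:

```cpp
#include <cassert>
#include <vector>

// Count how many decisions a candidate threshold gets right: genuine
// distances should fall below it, impostor distances at or above it.
int correctDecisions(const std::vector<double>& genuine,
                     const std::vector<double>& impostor,
                     double threshold) {
    int correct = 0;
    for (double d : genuine)
        if (d < threshold) ++correct;
    for (double d : impostor)
        if (d >= threshold) ++correct;
    return correct;
}

// Scan the candidate thresholds and keep the one with the most correct
// accept/reject decisions on the held-out distances.
double pickThreshold(const std::vector<double>& genuine,
                     const std::vector<double>& impostor,
                     const std::vector<double>& candidates) {
    double best = candidates.front();
    int bestCorrect = -1;
    for (double t : candidates) {
        int c = correctDecisions(genuine, impostor, t);
        if (c > bestCorrect) {
            bestCorrect = c;
            best = t;
        }
    }
    return best;
}
```

Once you have a value you are happy with, you can pass it as the `threshold` parameter when creating the model (e.g. `createEigenFaceRecognizer(num_components, threshold)`), so that `predict` returns `-1` for faces whose distance exceeds it.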

If your problem persists, please file a bug report and give me as many details as necessary to reproduce the problem.

Copyright OpenCV foundation, 2012-2018. Content on this site is licensed under a Creative Commons Attribution Share Alike 3.0 license.