
L2 relative error

asked 2014-06-26 01:19:23 -0600 by Marcelo

Hello,

I'm studying OpenCV for facial recognition and I read about the L2 relative error in the "Mastering OpenCV" book. What is the L2 relative error? Is it a classical formula or a newly proposed one?

Regards, Marcelo


Comments

It's for sure not a new formula; I guess they just mean the Euclidean (= L2) norm. By the way, most people haven't read this book, so a link to the page you are referring to would be helpful.
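For reference, the L2 (Euclidean) norm of the difference between two equally sized images A and B, which is what cv::norm(A, B, CV_L2) computes, is

\[
\|A - B\|_{2} \;=\; \sqrt{\sum_{i} \left( A_i - B_i \right)^{2}},
\]

where the sum runs over all pixels (and channels) of the images.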

Guanta ( 2014-06-26 04:09:51 -0600 )

Thanks for the help.

I can't find a link to download it, only a GitHub page: https://github.com/MasteringOpenCV/code/blob/master/Chapter8_FaceRecognition/recognition.cpp

Marcelo ( 2014-06-30 23:11:06 -0600 )

1 answer


answered 2014-06-30 23:15:16 -0600 by Marcelo

updated 2014-06-30 23:16:29 -0600

The code is this:

#include <opencv2/opencv.hpp>   // for cv::Mat, cv::norm() and the CV_L2 flag
using namespace cv;

// Compare two images by getting the L2 error (square-root of sum of squared error).
double getSimilarity(const Mat A, const Mat B)
{
    if (A.rows > 0 && A.rows == B.rows && A.cols > 0 && A.cols == B.cols) {
        // Calculate the L2 relative error between the 2 images.
        double errorL2 = norm(A, B, CV_L2);
        // Convert to a reasonable scale, since L2 error is summed across all pixels of the image.
        double similarity = errorL2 / (double)(A.rows * A.cols);
        return similarity;
    }
    else {
        //cout << "WARNING: Images have a different size in 'getSimilarity()'." << endl;
        return 100000000.0;  // Return a bad value
    }
}
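For context, a minimal usage sketch of getSimilarity(); the file names and the 0.3 threshold are made-up illustrations, not values from the book:

#include <iostream>

int main()
{
    // Load two equally sized grayscale face images (hypothetical file names).
    Mat faceA = imread("faceA.png", 0);   // flag 0 = load as grayscale
    Mat faceB = imread("faceB.png", 0);

    double similarity = getSimilarity(faceA, faceB);

    // Smaller values mean more similar images; 0.3 is just an arbitrary example threshold.
    if (similarity < 0.3)
        std::cout << "The two faces look similar: " << similarity << std::endl;
    else
        std::cout << "The two faces differ: " << similarity << std::endl;
    return 0;
}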

If the L2 relative error is a Euclidean distance, what are the "predict" and "confidence" values in OpenCV's FaceRecognizer class?

Regards, Marcelo


Comments


This should have been your original question! ;)

The 'confidence' value is the distance from the test image to the closest one in the training database (and the 'prediction' is the respective class label).

If the model allows reconstructing an image (Eigenfaces), you can do a second, validation check to find out how good or bad the prediction was (that's probably why the author called it 'relative error').

getSimilarity() is used here to find the distance between the same test image and another, synthetic image reconstructed from the trained model (main.cpp, line 463).
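To make that concrete, here is a minimal sketch of how the prediction and confidence come back from OpenCV's FaceRecognizer; 'model' and 'preprocessedFace' are placeholder names for an already trained recognizer and a preprocessed test image:

// Assumes 'model' is a trained cv::Ptr<FaceRecognizer> (e.g. Eigenfaces) and
// 'preprocessedFace' is a test image of the same size as the training images.
int predictedLabel = -1;     // will receive the class label of the nearest training image
double confidence = 0.0;     // will receive the distance to that nearest training image
model->predict(preprocessedFace, predictedLabel, confidence);
// A small distance ('confidence') means a close match; a large one means a poor match.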

berak ( 2014-07-01 00:12:25 -0600 )

Thanks berak!

Could you explain confidence and prediction in mathematical terms? Do these values use k-nearest neighbors and Euclidean distance? If so, how does that work?

Thanks, Marcelo

Marcelo ( 2014-07-02 10:46:27 -0600 )

Marcelo, I'm not sure I understand you.

  • The 'confidence' is the 1-nearest-neighbour Euclidean distance (chi-square in the LBP case).

  • The 'prediction' is the id or label of the person recognized (you had to supply one label for each image in the training, remember? That's exactly what you get back here).

  • Maybe looking at the code helps? There's no magic there: loop through the training data and keep the sample with the smallest distance, as in the sketch below.
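A standalone sketch of that 1-nearest-neighbour loop, assuming the training samples are stored as row vectors with one label each (an illustration of the idea, not OpenCV's internal code):

#include <opencv2/opencv.hpp>
#include <cfloat>
#include <vector>
using namespace cv;

// Return the label of the training sample closest to 'query' (1-nearest neighbour)
// and write the corresponding Euclidean distance to 'confidence'.
int predictNearestNeighbour(const std::vector<Mat>& samples,
                            const std::vector<int>& labels,
                            const Mat& query,
                            double& confidence)
{
    int bestLabel = -1;
    double bestDist = DBL_MAX;
    for (size_t i = 0; i < samples.size(); i++) {
        double dist = norm(samples[i], query, NORM_L2);  // Euclidean (L2) distance
        if (dist < bestDist) {                           // keep the smallest distance seen so far
            bestDist = dist;
            bestLabel = labels[i];
        }
    }
    confidence = bestDist;   // this is what comes back as 'confidence'
    return bestLabel;        // and this is the 'prediction' (class label)
}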

berak ( 2014-07-02 11:03:16 -0600 )

Thank you so much berak,

I finally understand these functions.

If you can help me a little more: what is the best metric for face recognition? The confidence and the L2 error are not working properly for me.

Thanks, Marcelo

Marcelo ( 2014-07-02 13:29:47 -0600 )

Hey, feel free to try out other distances there (i.e., just change the flag and recompile).

If your problem is more like "I'm getting bad results", then you have to think about preprocessing your images before training/testing: illumination (equalizeHist, CLAHE) and rotation/scale (so the eyes are on a horizontal line); see the sketch below.
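For illustration, a minimal sketch of that kind of preprocessing, assuming the face has already been detected and cropped from a colour (BGR) frame; the 70x70 size and the CLAHE parameters are arbitrary example values, and the eye-based rotation/scale alignment is not shown:

#include <opencv2/opencv.hpp>
using namespace cv;

// Convert a cropped face to grayscale, normalize illumination and resize it,
// so that training and test images all go through the same pipeline.
Mat preprocessFace(const Mat& faceBGR)
{
    Mat gray;
    cvtColor(faceBGR, gray, COLOR_BGR2GRAY);          // work on a single channel

    // Illumination normalization: either a global histogram equalization ...
    // equalizeHist(gray, gray);
    // ... or a local, contrast-limited one (CLAHE), as mentioned above.
    Ptr<CLAHE> clahe = createCLAHE(2.0, Size(8, 8));  // example clip limit / tile size
    clahe->apply(gray, gray);

    // Bring every face to the same fixed size (70x70 is just an example).
    Mat resized;
    resize(gray, resized, Size(70, 70));
    return resized;
}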

The bad news here is that plugging in some classifier (like OpenCV's face recognition code) is easy; the real work starts after that.

berak ( 2014-07-02 13:47:28 -0600 )

ok...Thanks

Marcelo ( 2014-07-02 14:50:20 -0600 )

Hello berak,

Is the preprocessing for the training images and the input image the same?

In my work I use grayscale, CLAHE and a bilateral filter on the training images (100 images) and also on the input image (webcam), but I'm getting bad results.

Have a suggestion?

Regards, and sorry for my English. Marcelo, Porto Alegre/Brasil

Marcelo ( 2014-07-06 16:07:09 -0600 )

Yes, the preprocessing for train and test images must be the same.

berak ( 2014-07-07 01:10:25 -0600 )


Stats

Asked: 2014-06-26 01:19:23 -0600

Seen: 4,157 times

Last updated: Jun 30 '14