
Display mean and Fisherfaces + image reconstruction

asked 2016-06-30 15:38:14 -0600 by atv

updated 2016-06-30 15:38:50 -0600

Hi all. So I added the code for displaying the mean, the Fisherfaces, and the reconstruction of those Fisherfaces to my webcam face recognition code. I have 3 sets of images for 3 persons.

Two things:

1. It displays 1 image for the mean (expected :-)) but only 2 for the Fisherfaces and the Fisherface reconstructions. I thought I would see 16 (well, it does say "at most", so what is that number based on?), or 3, based on the number of images I trained it with.

2. I'm doing a direct imshow afterwards, but I don't see these outputs updated; they look static to me. If that's normal, I guess all of this is computed on the trained model, not on the input the webcam is getting?

I guess I was looking for some cool picture that would be based on the input :-)

code:

// Here is how to get the eigenvalues of this Fisherfaces model:
Mat eigenvalues = model->getEigenValues();
// And we can do the same to get the Eigenvectors (read: Fisherfaces):
Mat W = model->getEigenVectors();
// Get the sample mean from the training data:
Mat mean = model->getMean();

imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));

// Display the first, at most 16 Fisherfaces:
for (int i = 0; i < min(16, W.cols); i++) {
    string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
    cout << msg << endl;
    // Get eigenvector #i:
    Mat ev = W.col(i).clone();
    // Reshape to original size & normalize to [0...255] for imshow:
    Mat grayscale = norm_0_255(ev.reshape(1, im_height));
    // Apply a Bone colormap for better sensing:
    Mat cgrayscale;
    applyColorMap(grayscale, cgrayscale, COLORMAP_BONE);
    // Display:
    imshow(format("fisherface_%d", i), cgrayscale);
}

// Display the image reconstruction at some predefined steps:
for (int num_component = 0; num_component < min(16, W.cols); num_component++) {
    // Slice the Fisherface from the model:
    Mat ev = W.col(num_component);
    // Project the first training image onto it, then reconstruct from there:
    Mat projection = LDA::subspaceProject(ev, mean, images[0].reshape(1, 1));
    Mat reconstruction = LDA::subspaceReconstruct(ev, mean, projection);
    // Normalize the result to [0...255]:
    reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
    // Display:
    imshow(format("fisherface_reconstruction_%d", num_component), reconstruction);
}

Alef


1 answer


answered 2016-07-01 01:52:49 -0600 by berak

updated 2016-07-01 01:59:55 -0600

You can see here that only C-1 components are retained in the LDA (where C is the number of classes). If you have 3 persons, you can only have a maximum of 2 projections.
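You can verify this with a minimal sketch (assuming the opencv_contrib face module, OpenCV 3.x, with dummy random images standing in for real training data):

    #include <opencv2/core.hpp>
    #include <opencv2/face.hpp>
    #include <iostream>
    using namespace cv;
    using namespace std;

    int main() {
        // Dummy data: 3 persons ("classes"), 2 random 100x100 images each.
        vector<Mat> images;
        vector<int> labels;
        for (int person = 0; person < 3; person++) {
            for (int n = 0; n < 2; n++) {
                Mat img(100, 100, CV_8UC1);
                randu(img, 0, 255);
                images.push_back(img);
                labels.push_back(person);
            }
        }
        Ptr<face::BasicFaceRecognizer> model = face::createFisherFaceRecognizer();
        model->train(images, labels);
        // C = 3 classes, so the LDA keeps C-1 = 2 components:
        cout << "fisherfaces: " << model->getEigenVectors().cols << endl; // prints 2
        return 0;
    }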

Maybe it gets easier to understand if you think of the LDA row vectors as a "border" or "difference" representation between classes: for 2 classes you need 1 border, for 3 classes 2 borders, etc.

(The example code was working on the AT&T faces database, with 40 individuals.)

"but i don't see them updated"

You'll only get different images if you retrain it with different input data; it is not using the signal from the camera at all.
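For example (a hypothetical sketch, reusing the images, labels and model variables from your snippet), the windows will only show something new after another call to train():

    // Hypothetical: newFace is one more grayscale image, same size as the
    // training images, labelled as a 4th person.
    images.push_back(newFace);
    labels.push_back(3);
    model->train(images, labels);        // Fisherfaces re-trains from scratch
    Mat W2 = model->getEigenVectors();   // C = 4 classes now -> W2.cols == 3
    // ... then re-run the imshow loops above to see the updated fisherfaces.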


Comments

Thanks berak - I understand now. I'll read up a bit more on how it works, but yes, I thought as much: it is the trained data that is used to get those visualisations.

atv ( 2016-07-01 03:56:50 -0600 )
