First, the training images are flattened and organized into a single training data matrix where each image is a single row (or column; either works, but the choice affects the orientation of later steps). We then compute the covariance matrix of this combined training data matrix.
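To make that concrete, here is a minimal NumPy sketch (the function and variable names are mine, not from any library) that stacks flattened images as rows, centers them, and computes the covariance matrix:

```python
import numpy as np

# Sketch: stack N flattened grayscale face images (each h x w) as the
# rows of a data matrix, then compute the covariance of the centered data.
def build_covariance(images):
    X = np.array([img.flatten() for img in images], dtype=np.float64)  # (N, h*w)
    mean = X.mean(axis=0)                    # the "mean face"
    X_centered = X - mean                    # center the data before PCA
    cov = np.cov(X_centered, rowvar=False)   # (h*w, h*w) pixel covariance
    return cov, mean, X_centered
```

Note that for real images the full (h\*w) x (h\*w) covariance matrix is huge; the original Eigenfaces paper instead works with the much smaller N x N matrix and maps its eigenvectors back. The sketch keeps the direct form only for clarity.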

We then find the subset of eigenvectors of the covariance matrix corresponding to the largest eigenvalues. These eigenvectors (the "eigenfaces") become the new basis vectors of our PCA subspace (so if we choose 5 eigenvectors, the PCA subspace is 5-dimensional). We can then project each image row/column of our training data matrix into the PCA subspace, which gives us reduced-dimensionality representations of the training images.
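Continuing the sketch above, one hedged way this step might look (again, names are illustrative):

```python
# Keep the k eigenvectors of the covariance matrix with the largest
# eigenvalues, then project the centered training rows onto them
# (k=5 here only to match the 5D example in the text).
def project_to_pca(cov, X_centered, k=5):
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: cov is symmetric
    order = np.argsort(eigvals)[::-1]       # indices by descending eigenvalue
    W = eigvecs[:, order[:k]]               # (h*w, k) eigenface basis
    return X_centered @ W, W                # projections: (N, k)
```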

Why use the eigenvectors (corresponding to the largest eigenvalues) of the covariance matrix? There are nice proofs showing that projecting the data onto these eigenvectors preserves the most possible variance of the original training dataset. Keep in mind, though, that we are not considering intraclass/interclass differences here; we are just looking at the training image dataset as a whole.
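For intuition, the usual one-line version of that proof (a standard PCA result; here C is the covariance matrix of the centered data X and w is a unit-length projection direction) is:

```latex
\operatorname{Var}(Xw) = w^{\top} C w, \qquad
\max_{\|w\|=1} w^{\top} C w
\;\Rightarrow\; \nabla_w \bigl( w^{\top} C w - \lambda (w^{\top} w - 1) \bigr) = 0
\;\Rightarrow\; C w = \lambda w
```

So the variance-maximizing direction is an eigenvector of C, and the variance attained is its eigenvalue; taking the top k eigenvectors therefore preserves the most variance in k dimensions.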

Once we project all the training dataset images into the PCA subspace, we can use their reduced-dimensionality representations for classification. When a test image comes in, we project it into the same PCA subspace and then use whatever classifier we like. OpenCV's Eigenfaces implementation finds and returns the nearest neighbor (k-NN with k=1) of the reduced-dimensionality test image. It also returns a confidence value, which is the Euclidean distance between the reduced-dimensionality test image and the closest reduced-dimensionality training image. If this distance exceeds some threshold you set, you can conclude that (maybe) this facial image does not belong to any of the people in your training dataset.
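A sketch of that 1-nearest-neighbor step, mirroring the description above (this is not OpenCV's code, and the threshold value is purely illustrative; pick one for your own data):

```python
# Classify a test image by the closest training projection in the PCA
# subspace; reject as "unknown" if the distance exceeds a threshold.
def predict(test_img, mean, W, train_proj, train_labels, threshold=5000.0):
    y = (test_img.flatten().astype(np.float64) - mean) @ W  # project test image
    dists = np.linalg.norm(train_proj - y, axis=1)          # Euclidean distances
    i = int(np.argmin(dists))                               # closest training image
    if dists[i] > threshold:
        return None, dists[i]   # (maybe) not anyone in the training set
    return train_labels[i], dists[i]
```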

Of course, if you implement PCA yourself, you can classify however you like. For example, here is a Python scikit-learn tutorial where an SVM is used.
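A minimal pipeline in the spirit of that tutorial might look like this (n_components=50 is an arbitrary illustrative choice, not a recommendation):

```python
# PCA for dimensionality reduction, then an SVM on the reduced features.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

clf = make_pipeline(PCA(n_components=50, whiten=True), SVC(kernel="rbf"))
# clf.fit(X_train, y_train)    # X: (n_samples, n_pixels) flattened faces
# y_pred = clf.predict(X_test)
```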

Upsides/downsides of Eigenfaces: please see the original Eigenfaces paper, which gives a nice discussion. One main advantage is that PCA dimensionality reduction removes redundant/less useful information from our data. One main disadvantage discussed by the authors is that differences in setting (such as illumination, face pose/orientation, background, etc.) will negatively affect results. Another disadvantage is that intraclass/interclass differences are not directly considered (we only consider the variance of the training dataset as a whole). If you want to account for them, take a look at LDA (Fisherfaces).

Important note: this is face identification (not authentication or verification). Given a test image, you are predicting the closest subject from your training dataset.

It is probably in your best interest to read the paper and take a look at the machine learning courses advised above.