# PCA Update Implementation - Discussion

Hello All.

This is a paper on merging and splitting eigenspaces, which is very useful if you have multiple datasets that you would like to join together. Splitting eigenspaces is also a nice opportunity: it could be used to build a classifier from, say, face data, by splitting out the subsets of the eigenspace that correspond to particular states of the face, such as emotions or eye movements.

Anyway, the linear algebra is quite a challenge for me.

http://www.cs.cf.ac.uk/Dave/Papers/Pami-EIgenspace.pdf

It is a 3-stage process, but each stage is quite involved. It is outlined in section 3.1; the preceding sections cover the theory behind an eigenface recognizer similar to the one used in OpenCV.

The data you start with are the eigenvectors and means of the two models. By adding another dimension, you can calculate the rotation between the two models. From this rotation you can derive a full set of eigenvectors that captures the variation in both sets.

Here is an outline of the 3 steps given in the paper, with characters changed to suit the format of this text input. The paper breaks the steps down further in the following section.

Step 1

Construct an orthonormal basis set, T, that spans both eigenspace models and meanX - meanY. This basis differs from the required eigenvectors, W, by a rotation, R, so that:

W = T R

Step2 Use T to derive a new eigenproblem. The solution of this problem provides the eigenvalues, S, needed for the merged eigenmodel. The eigenvectors, R, comprise the linear transform that rotates the basis set T.

Step 3

Compute the eigenvectors, W, as above, and discard any eigenvectors and eigenvalues using the chosen criteria (as discussed above) to yield the final W and S.

Has anyone got any ideas about what's going on here? Any further clues as to how we could implement this using the matrix multiplication methods available in OpenCV?



Step 2 - Discussion

Create a new eigenproblem by substitution. The main problem here is constructing the correct matrix from which to solve for the eigenvalues. Substitute:

T = [U,v];

into...

W = T R

This creates an eigenproblem whose solution yields R for the original equation.
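As a hedged sketch of what "the correct matrix" might look like: the merged covariance, expressed in T's coordinates, gives a small symmetric eigenproblem. The version below uses NumPy as a stand-in for OpenCV's cv::gemm / cv::eigen, and it reconstructs each model's full covariance explicitly, which is only practical for small P; the paper's algebra avoids that, but should arrive at the same small problem. All names are mine, not the paper's; n and m are the two sample counts.

```python
import numpy as np

def merged_eigenproblem(T, U, lamU, V, lamV, meanX, meanY, n, m):
    """Step 2 sketch: eigenvalues S and rotation R from the basis T."""
    d = (meanY - meanX).reshape(-1, 1)
    # Reconstruct each model's covariance from its eigenpairs (small P only).
    Cx = U @ np.diag(lamU) @ U.T
    Cy = V @ np.diag(lamV) @ V.T
    # Covariance of the pooled data: weighted parts plus a mean-shift term.
    C = (n / (n + m)) * Cx + (m / (n + m)) * Cy \
        + (n * m / (n + m) ** 2) * (d @ d.T)
    A = T.T @ C @ T                    # the small symmetric matrix to solve
    S, R = np.linalg.eigh(A)           # eigenvalues S, eigenvector rotation R
    order = np.argsort(S)[::-1]        # PCA convention: largest variance first
    return S[order], R[:, order]
```

Once S and R are in hand, W = T R recovers the merged eigenvectors.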


Step 1 - Discussion

The basis T must span three subspaces: 1) the subspace spanned by the eigenvectors U, 2) the subspace spanned by the eigenvectors V, and 3) the subspace spanned by meanX - meanY. The last of these is a single vector.

T = [U,v]

G =  transpose(U) V

H = V - U G

Any zero columns are removed to leave H. We also compute the residue h of meanY - meanX with respect to the eigenspace of U using (6):

g = transpose(U) (meanY - meanX)

h = (meanY - meanX) - U g (6)

v can now be computed by finding an orthonormal basis for [H, h], which is sufficient to ensure that T is orthonormal. Gram-Schmidt orthonormalization may be used to do this: v = Orthonormalize([H, h])
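Putting the pieces of this step together, here is a minimal runnable sketch in NumPy (standing in for OpenCV's cv::Mat products; the function name, the `tol` threshold, and the use of QR as a numerically stable Gram-Schmidt are my choices, not the paper's):

```python
import numpy as np

def build_basis_T(U, V, meanX, meanY, tol=1e-10):
    """Step 1 sketch: orthonormal basis T = [U, v] spanning both models."""
    G = U.T @ V                        # projections of V's columns onto U
    H = V - U @ G                      # components of V orthogonal to U
    H = H[:, np.linalg.norm(H, axis=0) > tol]    # drop (near-)zero columns
    d = (meanY - meanX).reshape(-1, 1)
    g = U.T @ d
    h = d - U @ g                      # residue of the mean difference, eq. (6)
    # Orthonormalize [H, h]; QR does the job of Gram-Schmidt here.
    Q, Rq = np.linalg.qr(np.hstack([H, h]))
    v = Q[:, np.abs(np.diag(Rq)) > tol]          # keep non-degenerate directions
    return np.hstack([U, v])           # T = [U, v]
```

For example, with U along e1, V along e2 and a mean shift along e3 in R^3, T comes out as a full 3x3 orthonormal basis.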


So, U (PxN) and V (PxM) are obtained by the PCA procedure. The vectors are the columns of these matrices. G (NxM) represents the projection of the vectors of V on U. H (PxM) holds the M vectors orthogonal to U (which then provide new information to the model). Since some of them could be described by the U vectors, it's possible that some of the columns of H are full of zeros. These have to be removed. Then you compute the new matrix Q = [U,H'], where H' is H without the zero columns, with size [P, N+M-#zeroVectors].

As for the means, I'm not sure of the step described in the paper, but I would proceed similarly to what we have done for V: h = (meanX - meanY); (Px1) g = transpose(Q) h; j = h - Q g; j is the component of h orthogonal to Q, and if ...

As for the orthonormalization, http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process is a standard procedure for doing it! Did I get what the problem was?

Haha. Yes, this is my understanding so far. However, it will take me some more time until I am confident that I can implement it. When it uses the notation [U,H'], is that just the matrices being concatenated? Removing the zeros from the H matrix will change its shape; won't that affect the matrix multiplication?

Removing zeroes just alters the number of columns, while the rows are kept the same. Think of the dimensions as the rows being the dimensions of the features and the columns being just the number of vectors! The matrix form is there just to perform vector multiplications more easily.

In fact, to project a vector v (Px1) on a basis of N vectors with size (Px1) you should do the scalar product N times, but if you pack the N vectors in a matrix Q (PxN) you can do that easily by just computing transpose(Q)*v, and you get an Nx1 vector with the projection scores.

Moreover, given Q as a PxN matrix, since you always compute Q*transpose(Q), the inner dimension N does not count: you always get a PxP matrix.
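To make the packing trick above concrete, here is a tiny self-contained NumPy check (all names illustrative):

```python
import numpy as np

P, N = 5, 3
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((P, N)))  # N orthonormal basis vectors
v = rng.standard_normal((P, 1))

scores = Q.T @ v                     # all N scalar products in one product
one_by_one = np.array([[Q[:, i] @ v[:, 0]] for i in range(N)])
assert np.allclose(scores, one_by_one)            # same numbers, one matmul
```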

Step 3 - Discussion.

S is the eigenvalue matrix, and R is the matrix of eigenvectors, which forms the rotation for T.

This now sounds like a simple PCA problem that could be handled with OpenCV's PCA function.
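A minimal sketch of this final step, assuming S and R come out of the step-2 eigensolve. The cumulative-variance cut-off is just one possible discard criterion (the paper leaves the choice open), and all names here are mine:

```python
import numpy as np

def finalize_model(T, R, S, var_keep=0.95):
    """Step 3 sketch: rotate T into W, then truncate by explained variance."""
    W = T @ R                           # merged eigenvectors, one per column
    frac = np.cumsum(S) / np.sum(S)     # cumulative explained variance
    k = int(np.searchsorted(frac, var_keep)) + 1  # smallest k reaching var_keep
    return W[:, :k], S[:k]
```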

