OpenCV Q&A Forum (http://answers.opencv.org/questions/) - Copyright OpenCV foundation, 2012-2018

PCA Update Implementation - Discussion
http://answers.opencv.org/question/30992/pca-update-implementation-discussion/
Asked by MRDaniel

Hello All.
This is a paper on merging and splitting eigenspaces. It is very useful if you have multiple datasets that you would like to join together. There is also a very nice opportunity to split eigenspaces, which could be useful for a classifier that contains, say, face data: we could split out the subsets of the eigenspace that correspond only to certain states of the face, such as emotions or eye movements.
Anyway, the linear algebra is quite a challenge for me.
http://www.cs.cf.ac.uk/Dave/Papers/Pami-EIgenspace.pdf
It is a 3-stage process, but each stage is quite involved. It is outlined in section 3.1; the preceding sections outline the theory behind the eigenfacerecognizer() method, similar to that used in OpenCV.
The data you start with is the eigenvectors and means of the two models. By adding another dimension, you can calculate the rotation between the two models. From this rotation, a full set of eigenvectors can be derived which includes the variation in both sets.
Here is an outline of the 3 steps provided in the paper, with characters changed to suit the format of this text input. The paper breaks the steps down further in the following section.
Step 1
Construct an orthonormal basis set, T, that spans both
eigenspace models and meanX - meanY. This basis differs from the
required eigenvectors, W, by a rotation, R, so that:
W = T R
Step 2
Use T to derive a new eigenproblem. The solution of this
problem provides the eigenvalues, S, needed for the
merged eigenmodel. The eigenvectors, R, comprise the
linear transform that rotates the basis set T.
Step 3
Compute the eigenvectors, W, as above and discard any
eigenvectors and eigenvalues using the chosen criteria (as
discussed above) to yield W and S.
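Numerically, the three steps might be sketched like this. This is a hypothetical numpy stand-in for OpenCV's Mat arithmetic; for illustration it assumes a merged covariance matrix is available to build the small eigenproblem, which is a simplification of what the paper actually derives:

```python
import numpy as np

def merge_eigenspaces(U, mean_x, V, mean_y, cov_merged):
    # Step 1: orthonormal basis T spanning U, V and meanX - meanY
    d = (mean_x - mean_y).reshape(-1, 1)
    T, _ = np.linalg.qr(np.hstack([U, V, d]))
    # Step 2: derive a smaller eigenproblem in T's coordinates
    A = T.T @ cov_merged @ T
    S, R = np.linalg.eigh(A)      # eigenvalues S, rotation R
    # Step 3: rotate back to get the merged eigenvectors W = T R
    W = T @ R
    return W, S
```

The returned W has orthonormal columns by construction, since T is orthonormal and R is a rotation.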
Anyone got any ideas here of what's going on? Any further clues as to how we could implement this using the matrix multiplication methods available in OpenCV?

(Asked Tue, 01 Apr 2014 03:59:11 -0500)
Answer by MRDaniel:
Step 2 - Discussion
Create a new eigenproblem by substituting. The main problem here is constructing the correct matrix from which to solve for the eigenvalues.
T = [U,v];
into...
W = T R
This substitution creates an eigenproblem whose solution yields R for the original equation.

(Answered Tue, 01 Apr 2014 04:12:50 -0500)
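The substitution can be made concrete with a small numpy sketch. The matrix C below is an illustrative toy scatter matrix, not the paper's exact construction: projecting C into T's coordinates gives a small eigenproblem whose eigenvector matrix R is exactly the rotation taking T to W.

```python
import numpy as np

np.random.seed(1)
P = 6
T, _ = np.linalg.qr(np.random.randn(P, 3))  # orthonormal basis, as from Step 1
B = np.random.randn(P, P)
C = T @ T.T @ (B @ B.T) @ T @ T.T           # toy scatter with range inside span(T)
A = T.T @ C @ T        # substitute W = T R: the new, smaller eigenproblem
S, R = np.linalg.eigh(A)                    # eigenvalues S and rotation R
W = T @ R              # eigenvectors of C, since span(T) contains range(C)
```

Here C @ W equals W scaled column-by-column by S, confirming W solves the original problem.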
Answer by MRDaniel:
Step 1 - Discussion
The basis T must span: 1) the subspace spanned by the eigenvectors U, 2) the subspace spanned by the eigenvectors V, and 3) the subspace spanned by meanX - meanY. The last of these is a single vector.
T = [U,v]
G = transpose(U) V
H = V - U G
Some columns of H may be zero; these zero vectors are removed to leave H. We also compute the residue h of meanY - meanX with respect to the eigenspace of U, using (6):

g = transpose(U) (meanY - meanX)

h = (meanY - meanX) - U g (6)

v can now be computed by finding an orthonormal basis for [H, h], which is sufficient to ensure that T is orthonormal. Gram-Schmidt orthonormalization [12] may be used to do this:
v = Orthonormalize([H, h])

(Answered Tue, 01 Apr 2014 04:08:03 -0500)
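The Step 1 recipe above translates almost line for line into numpy (a sketch; the tolerance used to detect zero columns is an assumed detail):

```python
import numpy as np

def step1_basis(U, V, mean_x, mean_y, tol=1e-10):
    G = U.T @ V                                  # G = transpose(U) V
    H = V - U @ G                                # residues of V w.r.t. U
    H = H[:, np.linalg.norm(H, axis=0) > tol]    # remove the zero columns
    d = mean_y - mean_x
    g = U.T @ d
    h = d - U @ g                                # residue of the mean difference, eq. (6)
    v, _ = np.linalg.qr(np.column_stack([H, h])) # Orthonormalize([H, h])
    return np.hstack([U, v])                     # T = [U, v]
```

Because H and h are already orthogonal to U, the QR basis v stays orthogonal to U, so T comes out orthonormal.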
Comment by Luca:
Removing zeroes just alters the number of columns, while the rows are kept the same.
Think of the rows as the dimensions of the features and the columns as just the number of vectors! The matrix form is there only to perform the vector multiplications more easily.
In fact, to project a vector v (Px1) on a basis of N vectors with size (Px1) you should do the scalar product N times, but if you pack the N vectors in a matrix Q (PxN) you can do that easily by just computing transpose(Q)*v, and you get an Nx1 vector with the projection scores.
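For example, a small numpy illustration of the shapes involved:

```python
import numpy as np

np.random.seed(3)
P, N = 5, 3
Q, _ = np.linalg.qr(np.random.randn(P, N))  # N basis vectors packed as columns (PxN)
v = np.random.randn(P)                       # a Px1 vector to project
scores = Q.T @ v                             # all N scalar products at once (Nx1)
recon = Q @ scores                           # back-projection; Q @ Q.T is PxP for any N
```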
Moreover, given Q as a PxN matrix, since you always compute Q*transpose(Q), the inner dimension N does not matter; you always get a PxP matrix.

(Tue, 01 Apr 2014 11:48:10 -0500)
Comment by Luca:
So, U (PxN) and V (PxM) are obtained by the PCA procedure. The vectors are the columns of these matrices.
G (NxM) represents the projection of the vectors of V on U.
H (PxM) holds the M vectors orthogonal to U (which therefore provide new information to the model). Since some of them could be described by the U vectors, it is possible that some of the columns of H are full of zeros. These have to be removed.
Then you compute the new matrix Q = [U,H'], where H' is H without the zero columns, with size [P,N+M-#zeroVectors].
As for the means, I'm not sure of the step described in the paper, but I would proceed similarly to what we have done for V.
h = (meanX-meanY); (Px1)
g = transpose(Q)*h;
j = h - Q*g;
j is the component of h orthogonal to Q, and if it is not 0 you can append it: Q = [Q, j].

(Tue, 01 Apr 2014 06:38:28 -0500)
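This comment can be sketched in numpy as follows. Normalizing j before appending it is my assumption (not stated above); without it the appended column would not be unit length and Q would stop being orthonormal:

```python
import numpy as np

np.random.seed(4)
P = 6
Q, _ = np.linalg.qr(np.random.randn(P, 3))  # stand-in for Q = [U, H']
mean_x, mean_y = np.random.randn(P), np.random.randn(P)
h = mean_x - mean_y                 # (Px1)
g = Q.T @ h                         # g = transpose(Q)*h
j = h - Q @ g                       # component of h orthogonal to Q
if np.linalg.norm(j) > 1e-10:       # append only if not (numerically) zero
    Q = np.hstack([Q, (j / np.linalg.norm(j))[:, None]])
```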
Comment by Luca:
As for the orthonormalization, the Gram-Schmidt process (http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) is a standard procedure for doing it! Did I get what the problem was?

(Tue, 01 Apr 2014 06:39:32 -0500)
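A classical Gram-Schmidt pass, which also drops dependent (zero-residue) columns as needed in Step 1, can be sketched as follows; in practice `np.linalg.qr` performs the same orthonormalization:

```python
import numpy as np

def gram_schmidt(A, tol=1e-10):
    basis = []
    for a in A.T:                                # take each column in turn
        w = a - sum((q @ a) * q for q in basis)  # subtract projections on basis so far
        n = np.linalg.norm(w)
        if n > tol:                              # keep only independent directions
            basis.append(w / n)
    return np.column_stack(basis) if basis else np.zeros((A.shape[0], 0))
```

Dependent columns leave a near-zero residue w and are simply skipped, which is exactly the zero-vector removal discussed above.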
Comment by MRDaniel:
Haha. Yes, this is my understanding so far. However, it will take me some more time until I am confident that I can implement it. When it uses the notation [U, H'], is that just the matrices being concatenated? Removing the zeros from the H matrix will change its shape; won't that affect the matrix multiplication?

(Tue, 01 Apr 2014 06:55:46 -0500)
Answer by MRDaniel:
Step 3 - Discussion.
S is the eigenvalue matrix and R is the matrix of eigenvectors, which is a rotation for T.
This now sounds like a simple PCA problem that could be handled with OpenCV's PCA function.

(Answered Tue, 01 Apr 2014 04:43:18 -0500)
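For the discard part of Step 3, one common criterion (an assumption here, since the paper leaves the choice open) is to keep the largest eigenvalues that retain a fixed fraction of the total variance:

```python
import numpy as np

def truncate_model(W, S, energy=0.95):
    # Keep the top eigenpairs covering `energy` of the total variance.
    order = np.argsort(S)[::-1]      # sort descending (eigh returns ascending)
    W, S = W[:, order], S[order]
    k = int(np.searchsorted(np.cumsum(S) / S.sum(), energy)) + 1
    return W[:, :k], S[:k]
```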