
How to perform Linear Discriminant Analysis with OpenCV

asked 2015-06-15 08:58:45 -0600 by jackbrucesimspon, updated 2015-06-29 22:42:33 -0600

Final update: Based on the help berak gave me, I've written a new question that gathers all the code he helped me with in one place, but with the aim of calculating a probability when classifying instead of finding the nearest data point.

I recently tested out scikit-learn's LDA with some different images and could see clear clusters form. Now I want to translate that code into C++ for my main program, and I was wondering whether anyone has knowledge/experience of working with the OpenCV library on things like Eigenfaces or Fisherfaces. I'm particularly interested in whether I can use LDA directly, without having to go through one of the pre-written facial recognition classes.

Update: Thank you, berak, for your amazing help and the great examples in your answer. I hope it's OK if I double-check a few things that I'm still a little confused about.

So if I have my training data set up like this:

Mat trainData; // 256 cols (flat 16*16 tags)  and x thousand rows (each tag)
Mat trainLabels; // 1D matrix of class labels e.g. 1, 2, 1, 1, 3, 3

int C = 3; // 3 tag types
int num_components = (C-1);
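
(A minimal sketch of how such a matrix might be assembled from the 16*16 tag crops; tagImages and tagLabels below are hypothetical placeholders for the extraction code:)

Mat trainData, trainLabels;
for (size_t i = 0; i < tagImages.size(); i++)    // tagImages: vector<Mat> of 16*16 tag crops
{
    Mat row;
    // clone() first so the data is continuous, then flatten to 1x256 and convert to float
    tagImages[i].clone().reshape(1, 1).convertTo(row, CV_32F);
    trainData.push_back(row);
    trainLabels.push_back(tagLabels[i]);         // tagLabels: vector<int> of class ids (1, 2 or 3)
}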

Then I initialise the LDA:

LDA lda(num_components); 
lda.compute(trainData, trainLabels); // compute eigenvectors

Next, I need to get the mean, eigenvectors and projections, as you suggested. In your comment above you explained that lda.compute computes the eigenvectors, so does this mean I can retrieve them with this command?

Mat eigenvectors = lda.eigenvectors();

I'm still a little confused about how I retrieve the mean, and also about where feature_row in this code comes from:

Mat projected = lda.project(feature_row); // project feature vecs, then compare in lda-space

Once I have the projected matrix, the mean and the eigenvectors, I then use this bit of your code to build the features matrix:

Mat features;
for (int i=0; i<trainData.rows; i++)
{
    Mat proj = LDA::subspaceProject(eigenvectors, mean, trainData.row(i));
    features.push_back(proj);
}
labels = trainLabels;

Now that the training is done, can I use the function you wrote below, passing it a new 1D tag matrix (that's what Mat feature is, right?), to predict which type it is?

int predict(Mat &feature)
{
    Mat proj = LDA::subspaceProject(eigenvectors, mean, feature);
    // compare to pre-projected train features.row(i),
    // return id of item with shortest distance
}

So the final step is for me to take the new tag (feature), iterate through each row of the features matrix I created during the training step, find the item with the shortest distance, and return its label. Will the data be in x, y coordinate format, or is there another way I should find the shortest distance?

Update 2: Thanks so much for the clarification. I think I understand now; is this correct?

LDA lda(num_components); 
lda.compute(trainData, trainLabels); // compute eigenvectors
Mat features = lda.project(trainData);

Then when I want to predict I take my ...


Comments


i had a look at the code, and:

lda.project(feature); // without mean

is the same as:

LDA::subspaceProject(eigenvectors, Mat(), feature); // just take an empty Mat, if you have no mean.

on the other hand, you could use reduce to acquire a mean feature vector:

   Mat mean; 
   reduce(trainData, mean, 0, cv::REDUCE_AVG, CV_64F);
   // (internal data in lda is double, so we need same type.)

you probably have to try whether it works better with or without ;)

berak (2015-06-17 01:49:50 -0600)

then, for prediction, just take the norm to find the closest dist:

int bestId = -1;
double bestDist = 999999999.9;
for (int i=0; i<projected.rows; i++)
{
    double d = norm( projected.row(i), projectedTestFeature);
    if (d < bestDist)
    {
          bestDist = d;
          bestId = i;
    }
}
int predicted = labels.at<int>(bestId); // there we are ! ;)
berak (2015-06-17 01:52:56 -0600)

Thank you so much, I think I might understand now. I wrote a second update to the question using your answers and code, which I think is correct; is there any chance you could check that I've finally understood? I really can't thank you enough.

jackbrucesimspon (2015-06-17 09:14:30 -0600)

update 2: i made a typo, it's labels.at<int>(bestId); in the last line.

then, you can probably project the whole trainData Mat in one go; you do not need to iterate over the rows.

i think it can handle class labels on rows or cols, but 1 per row seems the best fit (since your traindata is like that).

berak (2015-06-17 09:48:46 -0600)

Ah I see! So instead of iterating through I can just use "Mat features = lda.project(trainData);" to extract the matrix without iterating (I changed that in update 2). Then to predict I project the new 1D tag array "Mat proj_tag = lda.project(new_tag)" and can iterate through the features matrix when I compare distance between the normalised projected row and the test feature. Does that sound about right?

jackbrucesimspon (2015-06-17 16:28:45 -0600)

yes, sounds right.

berak (2015-06-18 00:36:17 -0600)
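
(For reference, a minimal sketch of the no-PCA pipeline the comments above converge on, assuming OpenCV 3.x where cv::LDA lives in the core module; new_tag and the other names follow the discussion:)

#include <opencv2/core.hpp>
#include <cfloat>
using namespace cv;

// kept from training:
Mat features;   // projected training set
Mat labels;     // class labels (CV_32S)
LDA lda(2);     // num_components = C-1 = 2 for the 3 tag classes

void train(const Mat &trainData, const Mat &trainLabels)
{
    lda.compute(trainData, trainLabels);   // compute the LDA eigenvectors
    features = lda.project(trainData);     // project the whole training set in one go
    labels = trainLabels;
}

int predict(const Mat &new_tag)            // new_tag: a 1x256 float row vector
{
    Mat proj = lda.project(new_tag);       // project the new tag into LDA space
    int bestId = -1;
    double bestDist = DBL_MAX;
    for (int i = 0; i < features.rows; i++)
    {
        double d = norm(features.row(i), proj);          // L2 distance in LDA space
        if (d < bestDist) { bestDist = d; bestId = i; }
    }
    return labels.at<int>(bestId);         // label of the nearest training sample
}

Note that lda.project() does not subtract a mean; as berak notes above, you can experiment with subtracting a mean feature vector via LDA::subspaceProject if that works better.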

Wonderful, I'm working to implement it now with my tag extraction program. Berak, thank you so much for your patience and your wonderful explanations!

jackbrucesimspon (2015-06-18 02:42:11 -0600)

OK! Update 3 has the code I've now written based on your advice, which I'm integrating into the program, so I really hope it works. I've extracted 2376 16*16 pixel images of the 3 tag types (they're in different folders) and used them to create the training set. Hope everything looks OK!

jackbrucesimspon (2015-06-28 13:43:24 -0600)

hmm for some reason I keep getting this error: "Image step is wrong (The matrix is not continuous, thus its number of rows can not be changed) in reshape". How strange.

jackbrucesimspon (2015-06-28 15:18:18 -0600)
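
(A note for anyone hitting the same error: Mat::reshape needs continuous data, so calling it on a submatrix/ROI view typically triggers exactly this message; cloning the view first is one common fix. image and roi below are hypothetical placeholders:)

Mat tag = image(roi);                 // a 16*16 ROI view of the frame -- not continuous
Mat row = tag.clone().reshape(1, 1);  // clone() makes the data continuous, then flatten to one row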

I think I fixed it! Since this question is getting a little crowded, I hope it's OK that I moved to a new question based on the code you taught me; I was hoping it might be possible to expand the prediction to be based on probability rather than on the nearest point. Thanks so much again, and I completely understand if you don't have time to answer.

jackbrucesimspon (2015-06-29 22:39:59 -0600)

1 answer


answered 2015-06-16 01:10:42 -0600 by berak, updated 2015-06-16 01:30:38 -0600

opencv's LDA is quite simple to use:

    LDA lda(num_components); // retain N elements (e.g. numClasses-1)
    lda.compute(trainData, trainLabels); // compute eigenvectors

    Mat projected = lda.project(feature_row); // project feature vecs, then compare in lda-space

but it comes with a restriction: you need more rows than cols in your trainData Mat. that means, if you're e.g. using 100x100 pixel images as features, each row vector has 10000 elements, so you either need more than 10000 images, or you have to shorten your row vectors.

that's why a PCA is usually applied in front of the LDA, to reduce the feature vectors to about the size of the image count. all in all we get this:

// we need to keep 4 items from the training, to do tests later:
// Mat labels; // class labels
// Mat mean; // mean from trainData
// Mat eigenvectors // projection Matrix
// Mat projections  // cached preprojected trainData (so we don't need to do it again and again)
void train(const Mat &trainData, const Mat &trainLabels)
{
    set<int> classes;
    for (size_t i=0; i<trainLabels.total(); ++i)
         classes.insert(trainLabels.at<int>(i));
    int C = classes.size(); // unique labels
    int N = trainData.rows;
    int num_components = (C-1); // to keep for LDA

    // step one, do pca on the original data:
    PCA pca(trainData, Mat(), cv::PCA::DATA_AS_ROW, (N-C));
    mean = pca.mean.reshape(1,1);

    // step two, do lda on data projected to pca space:
    Mat proj = pca.project(trainData);

    LDA lda(proj, trainLabels, num_components);

    // step three, combine both:
    Mat leigen;
    lda.eigenvectors().convertTo(leigen, pca.eigenvectors.type());
    gemm(pca.eigenvectors, leigen, 1.0, Mat(), 0.0, eigenvectors, GEMM_1_T);

    // step four, keep labels and projected dataset:
    for (int i=0; i<trainData.rows; i++)
    {
        // here's the actual magic. we don't use the lda's eigenvecs,
        // but the *product* of pca and lda eigenvecs to do the projection:
        Mat proj = LDA::subspaceProject(eigenvectors, mean, trainData.row(i));
        projections.push_back( proj );
    }
    labels = trainLabels;
 }

// later:
int predict(Mat &feature)
{
    Mat proj = LDA::subspaceProject(eigenvectors, mean, feature);
    // compare to the pre-projected train projections.row(i),
    // return id of item with shortest distance
}
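
A possible completion of predict(), following the shortest-distance comparison described in the comments (projections, labels, eigenvectors and mean are the members kept from train() above):

int predict(Mat &feature)
{
    Mat proj = LDA::subspaceProject(eigenvectors, mean, feature);
    int bestId = -1;
    double bestDist = DBL_MAX;                            // needs <cfloat>
    for (int i = 0; i < projections.rows; i++)
    {
        double d = norm(projections.row(i), proj);        // L2 distance in the combined subspace
        if (d < bestDist) { bestDist = d; bestId = i; }
    }
    return labels.at<int>(bestId);                        // label of the nearest training sample
}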

Comments

Thank you so much! I'm identifying small tags that are about 16*16 pixels and plan to train my projection with several thousand tags I can automatically extract (so I end up with a matrix with 256 columns and a couple of thousand rows). Does this mean I can use LDA directly, without the PCA step? Once I have my eigenvectors, can I then perform the subspace-project step on the 1D matrix of a new 16*16 tag to predict which class it belongs to?

jackbrucesimspon (2015-06-16 04:13:54 -0600)

yes, you probably can skip the pca step in that case.

berak (2015-06-16 04:17:26 -0600)

Hi berak, thank you so much for the help you've given me. I updated the question with some of your code where I'm still a little confused. I'm really sorry if some of the questions are basic; I can't express how grateful I am to you.

jackbrucesimspon (2015-06-16 21:26:02 -0600)

Just a quick question: how are you combining the PCA and LDA eigenvectors here, and why? gemm(pca.eigenvectors, leigen, 1.0, Mat(), 0.0, eigenvectors, GEMM_1_T);

Elador (2015-06-30 03:00:04 -0600)
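
(For what it's worth: with GEMM_1_T that call computes eigenvectors = pca.eigenvectors^T * leigen, i.e. it multiplies the two projection matrices into a single one, so one subspaceProject call performs PCA followed by LDA. A small sketch of the equivalence, where sample is a hypothetical 1 x d row vector:)

// projecting through PCA, then through LDA ...
Mat viaTwoSteps = LDA::subspaceProject(leigen, Mat(), pca.project(sample));
// ... gives the same result as one projection with the combined matrix:
Mat viaCombined = LDA::subspaceProject(eigenvectors, mean, sample);
// the two should agree up to numerical precision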

Really cool PCA+LDA explanation. I'm playing with the parameters and ran into the following issue: if I leave PCA with the same number of components, the LDA eigenvalues come out as zero (and yes, my sample has cols < rows). If I decrease the PCA components by at least 1, LDA.eigenvalues has a maximum. Any ideas? Thanks.

oktay (2018-03-15 06:34:36 -0600)

Stats

Asked: 2015-06-15 08:58:45 -0600

Seen: 5,456 times

Last updated: Jun 29 '15