How to perform Linear Discriminant Analysis with OpenCV

Final Update: Based on the help berak gave me, I've written a new question that gathers all the code he helped me with in one place, but with the aim of calculating a probability when classifying instead of just finding the nearest data point.

I recently tested out scikit-learn's LDA with some different images and could see clear clusters form. Now I want to translate that code into C++ for my main program, so I was wondering if anyone has any knowledge of or experience with the OpenCV library for things like Eigenfaces or Fisherfaces. I'm particularly interested in whether I can use LDA directly, without having to go through one of the pre-written facial recognition libraries.
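
For context, OpenCV does expose LDA directly as the cv::LDA class (in the core module in OpenCV 3.x), so no face recognition class is needed. A minimal self-contained sketch, with toy numbers made up purely for illustration:

#include <opencv2/core.hpp>
using namespace cv;

int main()
{
    // toy data: 6 samples, 2 features each, two classes (values are made up)
    Mat data = (Mat_<double>(6, 2) << 1.0, 1.1,  0.9, 1.0,  1.2, 0.8,
                                      5.0, 5.2,  4.8, 5.1,  5.3, 4.9);
    Mat labels = (Mat_<int>(6, 1) << 1, 1, 1, 2, 2, 2);

    LDA lda(1);                        // C-1 = 1 component for 2 classes
    lda.compute(data, labels);         // learn the discriminant direction
    Mat projected = lda.project(data); // 6 x 1: each sample in LDA space
    return 0;
}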

Update: Thank you, berak, for your amazing help and the great examples in your answer. I hope it's OK if I double-check a few things that I'm a little confused about.

So if I have my training data set up like this:

Mat trainData;   // 256 cols (flattened 16x16 tags), a few thousand rows (one per tag)
Mat trainLabels; // 1D matrix of class labels, e.g. 1, 2, 1, 1, 3, 3

int C = 3; // 3 tag types
int num_components = (C-1);

Then I initialise the LDA:

LDA lda(num_components); 
lda.compute(trainData, trainLabels); // compute eigenvectors

Next, I need to get the mean, eigenvectors, and projections like you suggested. In your comment above you stated that lda.compute computes the eigenvectors, so does this mean I can retrieve them with this call?

Mat eigenvectors = lda.eigenvectors();

I'm still a little confused about how I retrieve the mean, and also where feature_row comes from in this code:

Mat projected = lda.project(feature_row); // project feature vecs, then compare in lda-space
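
My best guess so far: cv::LDA doesn't seem to expose a mean accessor, so perhaps the mean for LDA::subspaceProject is meant to be computed from the training data itself, and feature_row is just any single flattened sample. A sketch of that assumption (whether this is exactly the mean berak intended, I'm not certain):

Mat mean;
reduce(trainData, mean, 0, REDUCE_AVG, CV_64F); // per-column average of all rows -> 1 x 256

Mat feature_row = trainData.row(0); // one flattened 16x16 tag, i.e. a 1 x 256 row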

Once I have the projected matrix, the mean, and the eigenvectors, I then use this bit of your code to build the features matrix:

Mat features;
for (int i = 0; i < trainData.rows; i++)
{
    Mat proj = LDA::subspaceProject(eigenvectors, mean, trainData.row(i));
    features.push_back(proj);
}
labels = trainLabels;

Now that this training is done, can I use the function you wrote below to pass in a new 1D tag matrix (that's what Mat feature is, right?) and predict what type it is?

int predict(Mat &feature)
{
    Mat proj = LDA::subspaceProject(eigenvectors, mean, feature);
    // compare to the pre-projected train features.row(i),
    // return the id of the item with the shortest distance
}

So the final step is for me to take the new tag (feature), iterate through each row of the features matrix I created during the training step, find the item with the shortest distance, and return its label. Will the data be in x, y coordinate format, or is there another way I should find the shortest distance?
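
If I understand the layout, with three classes each projected row has C-1 = 2 values, so it is effectively an (x, y) point in LDA space, and cv::norm gives the Euclidean distance between two such rows directly, e.g. (using the names from the snippets below):

double d = norm(features.row(i), proj_tag, NORM_L2); // L2 distance between two projected rows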

Update 2: Thanks so much for the clarification. I think I understand now; is this correct?

LDA lda(num_components); 
lda.compute(trainData, trainLabels); // compute eigenvectors
Mat features = lda.project(trainData);

Then when I want to predict I take my 1D matrix from the new tag (Mat new_tag):

Mat new_tag; // 1D tag to classify
Mat proj_tag = lda.project(new_tag);

int bestId = -1;
double bestDist = DBL_MAX; // from <cfloat>
for (int i = 0; i < features.rows; i++)
{
    double d = norm(features.row(i), proj_tag);
    if (d < bestDist) // keep the smallest distance, not the largest
    {
        bestDist = d;
        bestId = i;
    }
}
int predicted = labels.at<int>(bestId); // there we are ! ;)

Also, when I run "lda.compute(trainData, trainLabels);", does this assume that my samples are laid out as rows rather than columns?
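
For what it's worth, my understanding is that everything here uses the row convention: reshape(1, 1) turns each tag into a single row, e.g.:

// a 16x16 grayscale tag becomes a single 1 x 256 row ("tag.jpg" is a hypothetical path)
cv::Mat tag = cv::imread("tag.jpg", 0);
cv::Mat row = tag.reshape(1, 1); // 1 channel, 1 row
// trainData then grows to N x 256 via trainData.push_back(row);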

Update 3

void train()
{
    cv::Mat initial_image = cv::imread("/Users/u5305887/Desktop/tags/I/0.jpg", 0);
    cv::Mat trainData = initial_image.reshape(1, 1);

    // 939 'I' tags (0.jpg was already loaded above)
    for (int i = 1; i < 939; i++)
    {
        std::string filename = "/Users/u5305887/Desktop/tags/I/";
        filename = filename + std::to_string(i);
        filename = filename + ".jpg";
        cv::Mat image = cv::imread(filename, 0);
        cv::Mat flat_image = image.reshape(1, 1);
        trainData.push_back(flat_image);
    }

    // 978 'O' tags
    for (int i = 0; i < 978; i++)
    {
        std::string filename = "/Users/u5305887/Desktop/tags/O/";
        filename = filename + std::to_string(i);
        filename = filename + ".jpg";
        cv::Mat image = cv::imread(filename, 0);
        cv::Mat flat_image = image.reshape(1, 1);
        trainData.push_back(flat_image);
    }

    // 459 'Q' tags
    for (int i = 0; i < 459; i++)
    {
        std::string filename = "/Users/u5305887/Desktop/tags/Q/";
        filename = filename + std::to_string(i);
        filename = filename + ".jpg";
        cv::Mat image = cv::imread(filename, 0);
        cv::Mat flat_image = image.reshape(1, 1);
        trainData.push_back(flat_image);
    }

    // one label per training row: 939 ones ('I'), 978 twos ('O'), 459 threes ('Q') = 2376
    cv::Mat trainLabels(1, 2376, CV_32S);
    trainLabels.colRange(0, 939).setTo(1);
    trainLabels.colRange(939, 1917).setTo(2);
    trainLabels.colRange(1917, 2376).setTo(3);
    int C = 3; // 3 tag types
    int num_components = (C - 1);

    LDA lda(num_components);
    lda.compute(trainData, trainLabels); // compute eigenvectors
    Mat projected = lda.project(trainData);
    // note: lda, projected, and trainLabels are local to train() here;
    // they need to outlive this function (e.g. as class members)
    // for the prediction code below to use them
}

// later when I have a tag I need to classify

cv::Mat roi; // tag to classify
cv::Mat roi_flat = roi.reshape(1, 1);
Mat proj_tag = lda.project(roi_flat);

int bestId = -1;
double bestDist = DBL_MAX; // from <cfloat>
for (int i = 0; i < projected.rows; i++)
{
    double d = cv::norm(projected.row(i), proj_tag);
    if (d < bestDist) // smallest distance wins
    {
        bestDist = d;
        bestId = i;
    }
}
int predicted = trainLabels.at<int>(bestId); // trainLabels kept alive from train(), see note above
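
To wrap up, here is a minimal sketch of how I plan to hold the pieces together so the LDA, the projected training data, and the labels survive between training and prediction (the class and member names are my own, not from berak's answer):

#include <opencv2/core.hpp>
#include <cfloat>
using namespace cv;

// minimal nearest-neighbour classifier in LDA space (hypothetical names)
struct TagClassifier
{
    LDA lda;
    Mat projected; // training samples in LDA space, one row each
    Mat labels;    // 1 x N int labels, aligned with projected's rows

    TagClassifier() : lda(2) {} // C-1 components for C = 3 classes

    void train(const Mat &trainData, const Mat &trainLabels)
    {
        lda.compute(trainData, trainLabels);
        projected = lda.project(trainData);
        labels = trainLabels;
    }

    int predict(const Mat &flatTag) // flatTag: a 1 x 256 row, like trainData's rows
    {
        Mat proj = lda.project(flatTag);
        int bestId = -1;
        double bestDist = DBL_MAX;
        for (int i = 0; i < projected.rows; i++)
        {
            double d = norm(projected.row(i), proj); // Euclidean distance in LDA space
            if (d < bestDist)
            {
                bestDist = d;
                bestId = i;
            }
        }
        return labels.at<int>(bestId);
    }
};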