
BloodyKeyboard's profile - activity

2019-06-17 08:14:40 -0600 received badge  Popular Question (source)
2016-04-18 02:28:55 -0600 received badge  Supporter (source)
2016-04-13 10:13:41 -0600 received badge  Scholar (source)
2016-04-13 10:13:05 -0600 commented answer Why does LDA reduction give only one dimension for two classes?

Slide 3 of the presentation is not representative of how LDA should work, but slide 4 is.

Thank you for the advice; I'm going to try it. And thank you for your answers, they were very useful. Since my question about LDA has been answered, I'm closing this thread.

2016-04-12 10:30:57 -0600 commented answer Why does LDA reduction give only one dimension for two classes?

It seems I had misunderstood LDA, then. If I'm correct this time, LDA always projects the data onto at most C−1 directions, i.e. onto a single line for two classes. I thought it would be able to project onto a subspace of any dimension (not necessarily a 1D space).

Here is what I would like to accomplish: "Reducing data by projecting it onto a subspace (as PCA does), but using the Fisher linear discriminant."

Maximising data variance without relying on class labels works. However, depending on the situation, using the Fisher discriminant might be a better way to find a projection subspace, especially since it takes advantage of the class labels.

It seems I shouldn't use LDA itself but rather the same principle (the Fisher discriminant) to reduce the data. Do you know if any algorithm fitting this requirement is already implemented in OpenCV?
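In the meantime, here is the contrast as I currently understand it; a minimal sketch, assuming data is an N x 100 CV_32F Mat with one sample per row and labels an N x 1 CV_32S Mat of class labels (both names are placeholders, not real variables from my code):

#include <opencv2/core.hpp>

using namespace cv;

void compareReductions(const Mat& data, const Mat& labels)
{
    // Label-free: PCA keeps any requested number of components (50 here).
    PCA pca(data, Mat(), PCA::DATA_AS_ROW, 50);
    Mat byVariance = pca.project(data);   // N x 50

    // Label-aware: LDA keeps at most C-1 Fisher directions.
    LDA lda;
    lda.compute(data, labels);
    Mat byFisher = lda.project(data);     // N x 1 when there are two classes
}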

2016-04-11 10:33:21 -0600 commented question Why does LDA reduction give only one dimension for two classes?

From what I've read, LDA can be used as a reduction method as well (e.g. here).

Wikipedia also seems to say the same:

The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

(source)

I'm not interested in using the PCA method; I've already used it and would like to compare it to LDA reduction.

2016-04-11 10:25:52 -0600 commented answer Why does LDA reduction give only one dimension for two classes?

I might have misunderstood how this works, then. Please correct me if I'm wrong: in both cases (PCA and LDA), the algorithm tries to find a projection that reduces the data as effectively as possible. From what I could read (e.g. here), LDA can be used directly to reduce data and might be more accurate than PCA.

What you're telling me surprises me. The returned vector doesn't seem to contain probabilities. Here is an example of a vector returned after projection:

[1063.72663796166, 1100.15457383534, ..., -1102.669283385719, -1072.086030509124]

Are you sure the lda.project(..) function is used to classify data?

I'm positive; I've already used PCA reduction and would like to compare it to LDA reduction.
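To illustrate what I mean: the projection only yields coordinates, so classifying would still need an extra step on top of it, e.g. something like this nearest-projected-mean sketch (all names are hypothetical, assuming the first n0 training rows belong to class 0 and the rest to class 1):

#include <opencv2/core.hpp>
#include <cmath>

using namespace cv;

// Hypothetical two-class decision built on top of lda.project():
// compare the sample's projected coordinate to each class's mean coordinate.
int nearestProjectedMean(LDA& lda, const Mat& trainData, int n0, const Mat& sample)
{
    Mat proj = lda.project(trainData);                 // N x 1 coordinates (CV_64F), not probabilities
    double m0 = mean(proj.rowRange(0, n0))[0];         // mean coordinate of class 0
    double m1 = mean(proj.rowRange(n0, proj.rows))[0]; // mean coordinate of class 1
    double p  = lda.project(sample).at<double>(0);
    return (std::abs(p - m0) < std::abs(p - m1)) ? 0 : 1;
}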

2016-04-11 08:48:22 -0600 received badge  Editor (source)
2016-04-11 08:12:44 -0600 asked a question Why does LDA reduction give only one dimension for two classes?

Hi,

I'm trying to reduce data from two classes with the Linear Discriminant Analysis algorithm (OpenCV LDA documentation here).

Here is a short example of what I'm trying to accomplish:

LDA lda(num_components);
lda.compute(someData, classesLabels); // compute the LDA projection from the data and labels
Mat reducedData = lda.project(someData); // project (reduce) the input data

Let's say I have 100 dimensions per sample as input and want to keep 50 after reduction. If I'm understanding the documentation (here) correctly, num_components should be the number of kept dimensions.

However, I'm obtaining only one dimension regardless of the number I give to the LDA constructor. I looked at the LDA source code (here), which explains this behaviour:

...
// number of unique labels
int C = (int)num2label.size();
...
...
// clip number of components to be a valid number
if ((_num_components <= 0) || (_num_components > (C - 1))) {
    _num_components = (C - 1);
}
...
_eigenvalues = Mat(_eigenvalues, Range::all(), Range(0, _num_components));
_eigenvectors = Mat(_eigenvectors, Range::all(), Range(0, _num_components));
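For context, and if I'm reading the code right, the decomposition above solves the usual Fisher criterion (standard textbook notation below, not variables from the code):

J(W) = \frac{|W^T S_B W|}{|W^T S_W W|}, \qquad S_W^{-1} S_B \, v_i = \lambda_i v_i

S_B = \sum_{c=1}^{C} N_c (\mu_c - \mu)(\mu_c - \mu)^T, \qquad S_W = \sum_{c=1}^{C} \sum_{x \in c} (x - \mu_c)(x - \mu_c)^T

where \mu is the global mean and \mu_c, N_c are the mean and sample count of class c; the code then keeps the _num_components leading eigenvectors.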

Here are my questions:

  • The behaviour described in the documentation and the behaviour of the code seem to differ; is this normal? If so, could someone explain why the number of output dimensions is linked to the number of classes?
  • How should I proceed to get more than one dimension with two classes?
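For completeness, here is a minimal standalone example that reproduces the behaviour for me (synthetic random data standing in for mine):

#include <opencv2/core.hpp>
#include <iostream>

using namespace cv;

int main()
{
    // Synthetic stand-in for my data: 200 samples, 100 dimensions, two classes.
    Mat data(200, 100, CV_32F);
    randu(data, Scalar::all(0), Scalar::all(1));
    Mat labels(200, 1, CV_32S);
    labels.rowRange(0, 100).setTo(Scalar(0));
    labels.rowRange(100, 200).setTo(Scalar(1));

    LDA lda(50); // asking for 50 components
    lda.compute(data, labels);

    std::cout << lda.project(data).cols << std::endl; // prints 1, not 50
    return 0;
}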