
Explanation about LDA class / methods

asked 2017-03-15 12:06:12 -0600

Hello,

I'm about to start a project using LDA classification. I have started reading about OpenCV and the LDA class, but there are still some grey areas for me compared to the theory I have read here and the associated example (1):

  • I expected that the LDA algorithm would give me discriminant functions for each class I have trained. That way, I could have used them to predict the class of my test data, but it seems that the outputs are eigenvectors / values. How can I use them?

I have seen in this thread (2) that they perform a classic L2 norm to find the closest neighbour in the LDA subspace for prediction, but I can't find any theoretical explanation of LDA covering this part.
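For what it's worth, a minimal numpy sketch of that nearest-neighbour prediction step, with made-up data and a random stand-in for the projection matrix (in OpenCV the matrix would come from `LDA::eigenvectors()` and the projections from `LDA::project()`; nothing here is actual OpenCV code):

```python
import numpy as np

# Toy stand-ins: two made-up 4-D classes and a fake projection matrix W.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(3, 1, (10, 4))])
y_train = np.array([0] * 10 + [1] * 10)
W = rng.normal(size=(4, 1))          # pretend this came from the LDA fit

proj_train = X_train @ W             # training samples in the LDA subspace

def predict(x):
    # L2 distance to every projected training sample; nearest neighbour wins
    dists = np.linalg.norm(proj_train - x.reshape(1, -1) @ W, axis=1)
    return y_train[np.argmin(dists)]

print(predict(X_train[0]), predict(X_train[15]))  # prints: 0 1
```

So the eigenvectors only define the subspace; the actual classification rule (nearest neighbour here) is a separate choice layered on top.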

  • My other point is about the processing in the LDA class. The main processing starts at line 986, here (3), and I can't see any covariance matrix, which seems to be a core operation in LDA (sorry if I missed it; the OpenCV codebase is totally new to me).

If anyone could enlighten me about how to use this LDA :) Thank you!

Etienne

LINKS: (sorry for removing the 'http' part, I don't have the rights to post direct links...)

(1) : people.revoledu.com/kardi/tutorial/LDA/LDA.html

(1) : people.revoledu.com/kardi/tutorial/LDA/Numerical%20Example.html

(2) : answers.opencv.org/question/64165/how-to-perform-linear-discriminant-analysis-with-opencv/

(3) : github.com/opencv/opencv/blob/master/modules/core/src/lda.cpp


Comments

"and I can't see any covariance matrix" -- the "within-class scatter matrix" (Sw, ~line 1066) is the covariance you're looking for.

berak (2017-03-16 04:08:16 -0600)

Thank you berak, that's true, sorry I missed that...

etienne.demontalivet (2017-03-17 11:17:45 -0600)

"LDA algorithm would give me discriminant functions" -- would you explain that (or the difference)?

imho, it works pretty much like OpenCV's PCA model: you throw a lot of data (and labels, in the LDA case) at it to calculate a projection matrix, and later project your features into the resp. eigenspace (for LDA, this space has (numclasses-1) dimensions).

Then you classify those compressed vectors.

berak (2017-03-17 11:33:35 -0600)
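The (numclasses-1) point from the comment above can be checked numerically: the between-class scatter Sb is built from the C class means, whose (weighted) deviations from the global mean sum to zero, so Sb has rank at most C-1. A quick numpy sketch with made-up 3-class data (not OpenCV code):

```python
import numpy as np

# Made-up data: C = 3 classes of 20 samples in 5 dimensions.
rng = np.random.default_rng(1)
C, dim, n = 3, 5, 20
X = np.vstack([rng.normal(c * 2.0, 1.0, (n, dim)) for c in range(C)])
y = np.repeat(np.arange(C), n)

# Between-class scatter: Sb = sum_c n_c * (m_c - m)(m_c - m)^T
mean = X.mean(axis=0)
Sb = np.zeros((dim, dim))
for c in range(C):
    d = (X[y == c].mean(axis=0) - mean).reshape(-1, 1)
    Sb += n * d @ d.T

print(np.linalg.matrix_rank(Sb))  # prints: 2  (i.e. C - 1)
```

That rank bound is why the LDA subspace cannot have more than numclasses-1 dimensions, no matter how high-dimensional the input features are.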

1 answer


answered 2017-03-17 12:10:11 -0600

updated 2017-03-17 12:24:41 -0600

I've looked into Fisher LDA today, and understood what I missed:

  • There is no discriminant function as output of Fisher LDA (only a "good choice", which would be the hyperplane between the projections of the two means); discriminant functions are another area of research, and there are different approaches to predict incoming data after training the classifier.
  • The eigenvectors / values maximize the criterion function:

J = (m̃1 − m̃2)² / (s̃1² + s̃2²)   (m̃i = projected class means, s̃i = projected scatters)

which represents the separation between the classified groups. The eigenvectors thus give the directions that best separate those groups.

  • The comparison I made between OpenCV and the first two links is irrelevant, because one is LDA and the other Fisher LDA....

  • Finally, as berak pointed out, there is indeed a covariance matrix I didn't see...
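To make the eigenvector step above concrete, here is a rough numpy sketch on two made-up Gaussian classes. It mimics what lda.cpp does internally (build Sw and Sb, then take the dominant eigenvector of Sw⁻¹Sb) but is not OpenCV code:

```python
import numpy as np

# Two made-up 2-D Gaussian classes, 50 samples each.
rng = np.random.default_rng(42)
X0 = rng.normal([0, 0], 1.0, (50, 2))
X1 = rng.normal([4, 1], 1.0, (50, 2))
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

# Within-class scatter Sw (the "covariance" from berak's comment).
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
# Between-class scatter Sb, built from the difference of the means.
d = (m1 - m0).reshape(-1, 1)
Sb = d @ d.T

# The direction maximizing J is the top eigenvector of Sw^-1 Sb.
evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = evecs[:, np.argmax(evals.real)].real

# Projected onto w, the two classes should be well separated.
p0, p1 = X0 @ w, X1 @ w
print(abs(p1.mean() - p0.mean()) > 2 * max(p0.std(), p1.std()))
```

Note that for two classes Sb has rank 1, so there is a single useful eigenvector: the 1-D subspace berak mentioned ((numclasses-1) dimensions).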

Can anyone confirm this? :)

Etienne

