Explanation about the LDA class / methods
Hello,
I'm about to start a project using LDA classification. I have started reading about OpenCV and its LDA class, but there are still some grey areas for me compared to the theory I have read here and in the associated example (1):
- I was expecting the LDA algorithm to give me a discriminant function for each class I have trained, which I could then use to predict the class of my testing data, but it seems that the outputs are eigenvectors / eigenvalues. How can I use them?
I have seen on this thread (2) that they perform a classic L2-norm nearest-neighbour search in the LDA subspace to predict, but I can't find any theoretical explanation of LDA that covers this step.
- My other point is about the processing inside the LDA class. The main processing starts at line 986, here (3), and I can't see any covariance matrix, which seems to be a central operation in LDA (sorry if I missed it, the OpenCV code is totally new to me).
If anyone could enlighten me about how to use this LDA :) Thank you!
Etienne
LINKS: (sorry for removing the 'http' part, I'm not allowed to post direct links yet...)
(1) : people.revoledu.com/kardi/tutorial/LDA/LDA.html
(1) : people.revoledu.com/kardi/tutorial/LDA/Numerical%20Example.html
(2) : answers.opencv.org/question/64165/how-to-perform-linear-discriminant-analysis-with-opencv/
(3) : github.com/opencv/opencv/blob/master/modules/core/src/lda.cpp
" and I can't see any covariance matrix" -- the "within class scatter Matrix" (Sw, ~line 1066) is the covariance you're looking for.
Thank you berak, that's true, sorry I missed that...
" LDA algorithm would give me discriminant functions" -- would you explain that (or, the difference) ?
imho, it works pretty much like opencv's PCA model:
you throw a lot of data (and labels, in the LDA case) at it to calculate a projection matrix, and later project your features into the resp. eigenspace (for LDA, this space has (numclasses-1) dimensions).
then you classify those compressed vectors.
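here's a minimal sketch of that workflow, assuming your samples fit in a Mat with one sample per row (trainData / trainLabels / testSample are just placeholder names, and the L2 nearest-neighbour at the end is the matching step from thread (2), not something the LDA class does for you):

```cpp
#include <opencv2/core.hpp>
#include <cfloat>
#include <vector>

// trainData  : CV_32F Mat, one sample per row
// trainLabels: one int label per row of trainData
// testSample : single-row Mat, same width/type as trainData
int predictLDA(const cv::Mat& trainData, const std::vector<int>& trainLabels,
               const cv::Mat& testSample)
{
    // compute the projection matrix from data + labels
    cv::LDA lda(0);                         // 0 -> keep all (numclasses-1) components
    lda.compute(trainData, trainLabels);

    // project everything into the (numclasses-1)-dimensional lda subspace
    cv::Mat projTrain = lda.project(trainData);
    cv::Mat projTest  = lda.project(testSample);

    // classify the compressed vector: plain L2 nearest neighbour, as in (2)
    int    bestIdx  = 0;
    double bestDist = DBL_MAX;
    for (int i = 0; i < projTrain.rows; i++)
    {
        double d = cv::norm(projTrain.row(i), projTest, cv::NORM_L2);
        if (d < bestDist) { bestDist = d; bestIdx = i; }
    }
    return trainLabels[bestIdx];
}
```

(lda.eigenvectors() / lda.eigenvalues() give you the raw projection matrix and eigenvalues, if you'd rather apply the projection yourself.)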