
Mismatch of the Eigenvalues & Eigenvectors

asked 2013-04-05 13:33:00 -0600 by Alexandre Bizeau

updated 2013-04-15 13:32:41 -0600

Hello,

I need to retrieve the eigenvectors of a matrix. The problem is that my results don't match those from my MATLAB code.

I have a symmetric 100x100 matrix and I am trying to obtain its eigenvalues and eigenvectors. I work with a double matrix (CV_64F) to get the best precision possible (I have already tried float and it fails even more).

The eigenvalues look good, but they lose some accuracy towards the end (e.g. values 1 to 25 match MATLAB exactly, but the closer you get to 100, the more precision I lose; I can live with that).

The bigger problem is with the eigenvectors. They lose precision in the same way, but on top of that there is a sign issue: if I take the first 25x25 block of results, the magnitudes match MATLAB exactly, but the values are randomly positive or negative, so I end up with wrong information at the end.

Right now I'm using the cv::eigen function like this:

cv::eigen(oResultMax,oMatValue,oMatVector);
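In full, the call looks roughly like this (the random symmetric matrix below is only for illustration, my real data is different; note that cv::eigen stores the eigenvectors as rows, while MATLAB's eig returns them as columns):

#include <opencv2/core.hpp>

int main()
{
    // random symmetric CV_64F test matrix, just for illustration
    cv::Mat A(100, 100, CV_64F);
    cv::randu(A, 0.0, 1.0);
    cv::Mat S = 0.5 * (A + A.t());        // symmetrize

    cv::Mat oMatValue, oMatVector;
    cv::eigen(S, oMatValue, oMatVector);

    // oMatValue  : 100x1, eigenvalues in descending order
    // oMatVector : 100x100, one eigenvector per ROW
    //              (row i corresponds to oMatValue.at<double>(i))
    return 0;
}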

I have already tried the SelfAdjointEigenSolver from the Eigen library (Eigen SelfAdjointEigenSolver).
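This is roughly what that attempt looked like (a sketch only; in my real code the matrix comes from a cv::Mat, here it is random). Note that Eigen returns the eigenvalues in increasing order and the eigenvectors as columns, while cv::eigen uses descending order and rows:

#include <Eigen/Eigenvalues>
#include <iostream>

int main()
{
    // random symmetric test matrix, just for illustration
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 100);
    Eigen::MatrixXd S = 0.5 * (A + A.transpose());

    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> solver(S);
    if (solver.info() != Eigen::Success)
        return 1;

    // eigenvalues in increasing order, eigenvectors stored as columns
    std::cout << solver.eigenvalues().head(5) << std::endl;
    return 0;
}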

Does anyone have an idea or a suggestion?

EDIT: (added images of the results)

Here you can see an image of the eigenvalues. Compared to the MATLAB result, the double results are almost the same up to the 27th value, and the float results up to the 14th.

[image: eigenvalue comparison between MATLAB, OpenCV double and OpenCV float]

In the following images you have the eigenvectors, in this order: MATLAB, OpenCV double and OpenCV float.

As you can see, double matches up to the 35th value and float up to the 15th, and the signs of the results differ between MATLAB, double and float.

[images: eigenvectors from MATLAB, OpenCV double and OpenCV float]


Comments


Try doing an SVD with cv::SVD::compute; for a symmetric matrix the singular values in W are the absolute values of the eigenvalues.

Philipp Wagner (2013-04-06 15:32:08 -0600)
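For reference, the suggested call would look roughly like this (a sketch; A stands for the symmetric CV_64F input matrix):

#include <opencv2/core.hpp>

// sketch: returns the singular values of a symmetric CV_64F matrix A
static cv::Mat singularValues(const cv::Mat& A)
{
    cv::Mat w, u, vt;
    cv::SVD::compute(A, w, u, vt);
    // w holds the singular values in descending order; for a symmetric A
    // they equal the absolute values of its eigenvalues
    return w;
}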

Thanks Philipp. The problem is that I need the eigenvectors as well. I know that from the eigenvalues I can recover the vectors, but that means more processing. Still, it's an idea worth trying.

Alexandre Bizeau (2013-04-08 08:08:21 -0600)

I tried the SVD function and, at the same time, the PCA one (suggested in the Eigen reference). My SVD results are the same as with cv::eigen: losing accuracy and showing the sign problem. With PCA, the results really don't match and look worse than my float results with cv::eigen. Both functions were a good idea to try, but they are not the answer to my problem. Thanks Philipp.

Alexandre Bizeau (2013-04-08 10:59:56 -0600)

2 answers


answered 2013-04-13 13:02:23 -0600

updated 2013-04-16 16:40:21 -0600

I thought I'd turn this into an answer instead of using comments, as I want to avoid confusion for people coming here from Google. Without any sample data it is hard to say exactly what's going on.

First of all, an eigenvector does not have a well-defined "sign"; that's the whole point of eigenvectors. Multiplying an eigenvector by -1 yields a perfectly valid eigenvector again. I could throw the math at you, but the picture on the Wikipedia page on Eigenvalues and Eigenvectors probably already helps you understand it.

Let's recall the eigenvalue problem as:

Av = λv

Now multiply the eigenvector by a constant c and we end up with:

A(cv) = c(Av) = c(λv) = λ(cv)

So v is an eigenvector and cv is an eigenvector as well.
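That said, if you need reproducible signs just to compare against MATLAB, you can impose a convention yourself. This is not something cv::eigen does for you; the sketch below shows one common choice (flip each eigenvector so that its largest-magnitude component is positive):

#include <opencv2/core.hpp>
#include <cmath>

// fix the sign of each eigenvector (one per ROW, as cv::eigen returns them)
// so that its largest-magnitude component is positive
static void normalizeSigns(cv::Mat& evecs)   // CV_64F
{
    for (int i = 0; i < evecs.rows; ++i)
    {
        // index of the component with the largest magnitude in row i
        int ref = 0;
        for (int j = 1; j < evecs.cols; ++j)
            if (std::abs(evecs.at<double>(i, j)) > std::abs(evecs.at<double>(i, ref)))
                ref = j;

        // flip the whole row if that component is negative
        if (evecs.at<double>(i, ref) < 0.0)
            for (int j = 0; j < evecs.cols; ++j)
                evecs.at<double>(i, j) = -evecs.at<double>(i, j);
    }
}

Apply the same convention to the MATLAB eigenvectors (which are stored as columns there) before comparing, and the arbitrary signs drop out of the comparison.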

The following doesn't apply to your situation, since your matrix is symmetric; I just write it for reference. The solvers implemented in OpenCV solve the eigenvalue problem for symmetric matrices only. So the given matrix has to be symmetric, otherwise what you compute is not a valid eigendecomposition of your matrix (the same caveat applies to the Singular Value Decomposition). I used a solver for the general eigenvalue problem for my Fisherfaces implementation in OpenCV, but it isn't exposed in the OpenCV API. You can find a header-only implementation in one of my GitHub repositories:

Now to the eigenvalue accuracy you refer to. In the case of an ill-conditioned matrix, small rounding errors in the computation lead to large errors in the result. I won't throw the math at you again; you can read all this up at:

That page explicitly mentions eigenvalue problems.

Now here is my practical rule of thumb. It's very easy to calculate the condition number with MATLAB or GNU Octave by simply using the cond function. If rcond (the reciprocal condition number) yields a number close to your machine eps, your matrix is most likely ill-conditioned and the solvers are going to calculate something essentially random. You can easily determine your machine eps by typing eps in GNU Octave; I guess it is the same in MATLAB.

Just as a side note, for a given matrix X, log10(cond(X)) gives you roughly the number of decimal digits you may lose to roundoff errors. An IEEE double has about 16 significant decimal digits, so if your matrix has a condition number of 10^12, you should expect only about 4 digits of your result to be accurate.
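If you want to check this on the OpenCV side without going back to MATLAB, a rough sketch (2-norm condition number from the singular values):

#include <opencv2/core.hpp>
#include <cmath>

// rough 2-norm condition number of a CV_64F matrix A;
// log10 of the result is roughly the number of decimal digits lost
static double conditionNumber(const cv::Mat& A)
{
    cv::Mat w;
    cv::SVD::compute(A, w, cv::SVD::NO_UV);   // singular values, descending
    double smax = w.at<double>(0);
    double smin = w.at<double>(w.rows - 1);
    return smax / smin;                       // beware: smin may be ~0
}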


Comments

@Philipp Wagner: Thank you for your time. Right now I have found some implementations of eig in a MATLAB toolbox and I am trying to see if there is much difference between the built-in function and those. Stay tuned, I'll post an edit with more details about my results and everything.

Alexandre Bizeau (2013-04-15 12:58:11 -0600)

I thought my accuracy problem might be about something like that (the condition number). I read the article you gave me and tried it. So my 100x100 matrix is K; it's the one I need to run rcond on, right? If so, the rcond result is really bad: 6.8180e-020, and my MATLAB eps is 2.2204e-016, so I'm really near 0.0 and my matrix K is really ill-conditioned. So what can I do? I will continue to read about it, because this could be a good lead and maybe why I have the sign problem and the accuracy issue. Working on it! Thank you.

Alexandre Bizeau (2013-04-16 09:31:04 -0600)

answered 2013-04-05 14:03:21 -0600

Basically, a lot of OpenCV functions use the CV_32F type. I am guessing that since you are pushing CV_64F data, it gets converted down and thus loses precision.

Could you check whether the results still differ a lot if you run the eigenvalue/eigenvector computation on float (single precision) data in MATLAB instead of double?
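To see the precision effect purely on the OpenCV side, one way (a sketch; the random test matrix is only for illustration) is to run cv::eigen at both depths and compare:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // random symmetric test matrix, just for illustration
    cv::Mat A(100, 100, CV_64F);
    cv::randu(A, 0.0, 1.0);
    cv::Mat S64 = 0.5 * (A + A.t());

    cv::Mat S32;
    S64.convertTo(S32, CV_32F);            // same data at single precision

    cv::Mat val64, vec64, val32, vec32;
    cv::eigen(S64, val64, vec64);
    cv::eigen(S32, val32, vec32);

    val32.convertTo(val32, CV_64F);        // compare eigenvalues in double
    std::cout << "max eigenvalue difference: "
              << cv::norm(val64, val32, cv::NORM_INF) << std::endl;
    return 0;
}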


Comments

When I'm using float I face the same problems: the sign problem with the eigenvectors and bad precision on the eigenvalues. But it starts losing accuracy at around value 9 instead of 25-30, and at value 100 I have a difference of 0.000001 with double versus 0.01 with float. And double, float and MATLAB give different signs; the double and float signs don't match each other either.

Alexandre Bizeau (2013-04-05 14:18:34 -0600)

And I have never tried using float in MATLAB, but if you think it could help, I can try it now.

Alexandre Bizeau (2013-04-05 14:21:09 -0600)

Basically, if you really want to know where the problem is, you should compare the implementation details. Probably there is something different in the underlying implementations.

StevenPuttemans (2013-04-05 14:34:04 -0600)

I won't be able to find the MATLAB eig source because it's a built-in function, and I can't be sure the Octave one matches it. But I will try to work on that, and maybe try the SVD like Philipp said. Thanks.

Alexandre Bizeau (2013-04-08 08:11:06 -0600)

@StevenPuttemans I really want to see you going through the implementation details of an eigenvalue solver. Most of the stuff is mathematical magic to me, and I don't think I would find the difference in the implementations. Looking at the data I feed into a solver should be the very first thing to check. And I tend to rely on the fact that the algorithms implemented by BLAS (and associated projects) have been used by millions of mathematicians who would have spotted errors. I don't think the OpenCV project has invented a new solver, but rather uses the BLAS implementations.

Philipp Wagner (2013-04-13 13:25:24 -0600)

Ever heard of the word 'comment'? I did not tell you it would be easy, far from it. I hardly get the math behind eigenvalues myself, but it can be done. It just takes patience and dedication. It is the only way people will find the actual difference :)

StevenPuttemans (2013-04-13 16:06:13 -0600)

@StevenPuttemans I went into the OpenCV code and figured out that the implementation is written for both double and float, using the function JacobiImpl_ in modules\core\src\lapack.cpp. So I don't think the problem is there. But maybe @Philipp Wagner is right, because the function starts with eps = std::numeric_limits<>::epsilon(). Then again, the C++ eps is equal to MATLAB's. I'm still working on this. Ty

Alexandre Bizeau (2013-04-16 14:31:54 -0600)
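As a quick cross-check of that last point, the C++ machine epsilons can be printed directly; they match what MATLAB reports as eps and eps('single') (sketch):

#include <limits>
#include <iostream>

int main()
{
    // the same quantities MATLAB reports as eps ('double') and eps('single')
    std::cout << "double eps: " << std::numeric_limits<double>::epsilon() << "\n"   // 2.22045e-16
              << "float  eps: " << std::numeric_limits<float>::epsilon()  << std::endl; // 1.19209e-07
    return 0;
}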

I know that this post is really, really old, but... have you figured it out? I'm facing the same problem and unfortunately don't have much time to investigate further. I have a symmetric matrix but still get results different from Octave, especially for the tiny numbers.

HYPEREGO (2020-02-29 15:08:10 -0600)
