I thought I'd turn this into an answer instead of using comments, as I want to avoid confusion for people coming here from Google. Since you don't provide any sample data, it is hard to say exactly what's going on.

First of all, an eigenvector has no such thing as a "sign"; that's the whole point of eigenvectors. Multiplying an eigenvector by -1 yields a perfectly valid eigenvector again. I could throw the math at you, but the picture on the Wikipedia page on Eigenvalues and Eigenvectors probably already helps you understand it.

Let's recall the eigenvalue problem:

Av = λv

Now let us multiply both sides by a constant c, and we end up with:

A(cv) = c(Av) = c(λv) = λ(cv)

So v is an eigenvector, and cv is an eigenvector as well.
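To see this numerically, here is a small NumPy sketch (NumPy stands in for OpenCV here, and the matrix A is just a made-up example): it checks that an eigenvector returned by the solver still satisfies the eigenvalue equation after flipping its sign or rescaling it.

```python
import numpy as np

# A small made-up symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)  # eigenvalues and eigenvectors (symmetric solver)
v = V[:, 0]                 # eigenvector belonging to eigenvalue lam[0]

# v satisfies the eigenvalue equation ...
print(np.allclose(A @ v, lam[0] * v))              # True
# ... and so does -v, or any other nonzero multiple cv.
print(np.allclose(A @ (-v), lam[0] * (-v)))        # True
print(np.allclose(A @ (3 * v), lam[0] * (3 * v)))  # True
```

Which sign you get back is an arbitrary choice made by the underlying routine, so two libraries (or two runs on different machines) may legitimately disagree.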

The following doesn't apply to your situation, as your matrix is symmetric; I just write it down for reference. The solvers implemented in OpenCV solve the eigenvalue problem for symmetric matrices. So the given matrix has to be symmetric, else you aren't doing an eigenvalue decomposition (and the same goes for a Singular Value Decomposition). I have used a solver for the general eigenvalue problem for my Fisherfaces implementation in OpenCV, but it isn't exposed in the OpenCV API. You can find a header-only implementation in one of my GitHub repositories:
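The symmetry requirement is easy to demonstrate with NumPy, whose np.linalg.eigh has a similar contract to a symmetric-only solver: it assumes symmetry and, by default, only reads one triangle of the input, so feeding it a non-symmetric matrix silently decomposes a different matrix. The matrix M below is a made-up example.

```python
import numpy as np

# A deliberately non-symmetric matrix.
M = np.array([[0.0, 1.0],
              [2.0, 0.0]])

# The symmetric solver only reads one triangle of M, so it decomposes a
# *different* (symmetrized) matrix: its output does not satisfy Mv = λv.
lam_s, V_s = np.linalg.eigh(M)
print(np.allclose(M @ V_s[:, 0], lam_s[0] * V_s[:, 0]))  # False

# A general eigenvalue solver handles arbitrary square matrices.
lam_g, V_g = np.linalg.eig(M)
print(np.allclose(M @ V_g[:, 0], lam_g[0] * V_g[:, 0]))  # True
```

The "random stuff" you get from the symmetric solver is a perfectly good eigendecomposition, just not of your matrix.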

Now to the eigenvalue accuracy you refer to: in the case of an ill-conditioned matrix, small rounding errors in the computation will lead to large errors in the result. I won't throw the math at you again, but you can read all of this up at:

The page explicitly mentions eigenvalue problems.

Now here is my practical rule of thumb. It's very easy to check the condition number with MATLAB or GNU Octave by simply using the cond function, or its reciprocal estimate rcond. If rcond yields a number close to your machine eps, your matrix is most likely ill-conditioned, and solvers are going to calculate some totally random stuff for you. You can easily determine your machine eps by typing eps in GNU Octave; it works the same in MATLAB.
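If you'd rather stay in Python, the same check can be sketched with NumPy. Note that NumPy has no rcond function; 1/np.linalg.cond is a reasonable stand-in, although Octave's rcond uses a cheaper norm estimate. The Hilbert matrix below is a made-up example of a famously ill-conditioned matrix.

```python
import numpy as np

eps = np.finfo(float).eps  # machine epsilon, ~2.2e-16 (what `eps` prints in Octave)

# A classic ill-conditioned example: the n x n Hilbert matrix, H[i,j] = 1/(i+j-1).
n = 12
i = np.arange(1, n + 1)
H = 1.0 / (i[:, None] + i - 1.0)

rcond = 1.0 / np.linalg.cond(H)  # reciprocal condition number (2-norm based)
print(rcond)
print(rcond < 10 * eps)  # True: rcond is down at machine eps, expect garbage
```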

Just as a side note: for a given matrix X, log10(cond(X)) gives you roughly the number of decimal digits you may lose to roundoff errors. An IEEE double precision number carries about 16 significant decimal digits, so if your matrix has a condition number of 10^12, you should expect only about 4 digits of your result to be accurate.
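A quick sketch of that digit-counting rule, again with NumPy and made-up Hilbert matrices of growing size:

```python
import numpy as np

def hilbert(n):
    # H[i,j] = 1/(i+j-1), a classic family of ill-conditioned matrices.
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i - 1.0)

for n in (4, 8, 12):
    digits_lost = np.log10(np.linalg.cond(hilbert(n)))
    print(n, digits_lost)

# With ~16 significant decimal digits in a double, the n = 12 case leaves
# essentially no accurate digits in a computed solution.
```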