
So I noticed that I get different answers for the eigendecomposition of the 4x4 matrix of all 1s.

In Python using numpy.linalg.eig:

import numpy

M = 4  # dimension of the all-ones matrix
matrix = numpy.ones((M, M), dtype=float)
values, vectors = numpy.linalg.eig(matrix)

Python Result:

V1: [-0.866025 +0.288675 +0.288675 +0.288675]
V2: [+0.500000 +0.500000 +0.500000 +0.500000]
V3: [+0.391955 +0.597433 -0.494694 -0.494694]
V4: [+0.866025 -0.288675 -0.288675 -0.288675]

In C using LAPACK DSYEV:

#define NN 4
#define LDA NN

/* LAPACK is a Fortran library, so the routine must be declared by hand;
   the trailing underscore follows the usual Fortran name-mangling. */
extern void dsyev_(char *jobz, char *uplo, int *n, double *a, int *lda,
                   double *w, double *work, int *lwork, int *info);

int main(void) {
    int n = NN, lda = LDA, lwork = 3 * NN - 1, info;
    char jobz = 'V';  /* compute eigenvalues and eigenvectors */
    char uplo = 'U';  /* read the upper triangle of a */
    double w[NN];     /* eigenvalues: length n, not n*n */
    double work[3 * NN - 1];
    double a[LDA * NN] = {
       1,  1,  1,  1,
       1,  1,  1,  1,
       1,  1,  1,  1,
       1,  1,  1,  1
    };

    dsyev_(&jobz, &uplo, &n, a, &lda, w, work, &lwork, &info);
    return 0;
}

C DSYEV Result:

V1: +0.000596 +0.000596 -0.707702 +0.706510 
V2: +0.500000 +0.500000 -0.499157 -0.500842 
V3: +0.707107 -0.707107 -0.000000 +0.000000 
V4: +0.500000 +0.500000 +0.500000 +0.500000  

In C using LAPACK DGEEV:

#define NN 4
#define LDA NN
#define LDVL NN
#define LDVR NN

/* LAPACK is a Fortran library, so the routine must be declared by hand;
   the trailing underscore follows the usual Fortran name-mangling. */
extern void dgeev_(char *jobvl, char *jobvr, int *n, double *a, int *lda,
                   double *wr, double *wi, double *vl, int *ldvl,
                   double *vr, int *ldvr, double *work, int *lwork, int *info);

int main(void) {
    char compute_left = 'V';   /* compute left eigenvectors */
    char compute_right = 'V';  /* compute right eigenvectors */
    int n = NN, lda = LDA, ldvl = LDVL, ldvr = LDVR, info, lwork = 4 * NN;
    double work[4 * NN];       /* LWORK >= 4*N when eigenvectors are requested */
    double wr[NN], wi[NN];     /* real and imaginary parts of the eigenvalues */
    double vl[LDVL * NN], vr[LDVR * NN];
    double a[LDA * NN] = {
       1,  1,  1,  1,
       1,  1,  1,  1,
       1,  1,  1,  1,
       1,  1,  1,  1
    };

    dgeev_(&compute_left, &compute_right, &n, a, &lda, wr, wi,
           vl, &ldvl, vr, &ldvr, work, &lwork, &info);
    return 0;
}

C DGEEV Result:

V1: -0.866025 +0.288675 +0.288675 +0.288675 
V2: -0.500000 -0.500000 -0.500000 -0.500000 
V3: -0.000000 -0.816497 +0.408248 +0.408248 
V4: -0.000000 -0.000000 -0.707107 +0.707107 

The results are all different!

So I have two major questions:

  1. Why? Is this due to degeneracy in the all-ones matrix?
  2. How can I replicate the Python results in C?

Any insight would be appreciated.

The Dude
  • The results seem too far off for this to be a matter of numeric instability in the libraries. Maybe it expects certain matrices to have been initialized to zero or some other value? – Paul Ogilvie Dec 28 '16 at 15:40
  • @PaulOgilvie I just tried initializing all the relevant arrays to zero beforehand, and I still get the same answers. – The Dude Dec 28 '16 at 15:49

2 Answers


All are correct. Your matrix has two eigenvalues, 4 and 0. The eigenspace for 4 is the line spanned by [1,1,1,1], a multiple of which shows up in all lists. The eigenspace for 0 is the 3-space of vectors satisfying x_1 + x_2 + x_3 + x_4 = 0. The three methods have each given you a different basis for this subspace, except numpy, which for some reason only gave you vectors spanning a two-dimensional slice of it.
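Here is a minimal numpy sketch of these checks (the tolerance and the rank test are my own additions; whether you reproduce the degenerate numpy output likely depends on your numpy version):

import numpy as np

A = np.ones((4, 4))
values, vectors = np.linalg.eig(A)

# Each column v of `vectors` satisfies A @ v = lam * v for its eigenvalue lam.
for lam, v in zip(values, vectors.T):
    assert np.allclose(A @ v, lam * v)

print(np.round(values, 6))  # one eigenvalue 4, three numerically 0

# A genuine basis of the 0-eigenspace has rank 3; the vectors reported in
# the question would give only rank 2, since V4 = -V1 there.
zero_vecs = vectors[:, np.abs(values) < 1e-9]
print(np.linalg.matrix_rank(zero_vecs))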

To my mind, the results of DGEEV are the best of the ones you report, since they give an orthonormal basis for the 0-eigenspace in a sensible staircase (echelon-like) form.
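That claim can be checked directly; here is a short numpy verification using the DGEEV vectors reported above (the values are rounded copies from the question, so the checks hold only to a few decimal places):

import numpy as np

# DGEEV eigenvectors for the eigenvalue 0 (V1, V3, V4 above), as rows.
B = np.array([
    [-0.866025,  0.288675,  0.288675,  0.288675],
    [ 0.000000, -0.816497,  0.408248,  0.408248],
    [ 0.000000,  0.000000, -0.707107,  0.707107],
])

print(np.round(B @ B.T, 4))        # ~identity matrix: the rows are orthonormal
print(np.round(B.sum(axis=1), 4))  # ~0: each row lies in the 0-eigenspace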

Nick Matteo
  • I figured it was something like this. Is there a way I can get the same basis in C as numpy gives me? – The Dude Dec 28 '16 at 15:50
  • @TheDude: Not that I know of. There are infinitely many possible bases and no particular reason for the implementation to pick any given one. – Nick Matteo Dec 28 '16 at 16:40
  • @TheDude: Actually, now that I look at it, the result numpy gives isn't a basis. V4 is just negative V1 in the results you gave. On my system, it repeats one vector twice without even a sign change. – Nick Matteo Dec 28 '16 at 22:09

Three of the four eigenvalues are 0 (try printing out values in your Python script). Because all of the elements of the matrix are identical (ones), any vector whose elements add up to zero is a valid eigenvector for the eigenvalue 0. Exactly how these vectors are chosen is not important, so the fact that different software finds different eigenvectors for the zero eigenvalue does not matter. You should confirm that these eigenvectors indeed have elements which add up to 0, as the sketch below does.
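A small numpy sketch of that confirmation (the tolerance is my own choice):

import numpy as np

A = np.ones((4, 4))
values, vectors = np.linalg.eig(A)

# For each numerically zero eigenvalue, the eigenvector's elements sum to ~0.
for lam, v in zip(values, vectors.T):
    if abs(lam) < 1e-9:
        assert abs(v.sum()) < 1e-9
        print(v, v.sum())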

The last eigenvalue is 4 (non-zero), which implies that the corresponding eigenvector must have identical (non-zero) elements. The exact values of these elements then depend on the normalization of the eigenvectors, which also seems to differ between your examples.
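One way to remove that ambiguity, at least for the non-degenerate eigenvalue, is to post-process every eigenvector into a canonical form. The convention sketched below (unit norm, first non-negligible entry positive) is my own choice, not something any of the libraries guarantees:

import numpy as np

def canonicalize(v, tol=1e-12):
    """Scale to unit norm and flip sign so the first nonzero entry is positive."""
    v = v / np.linalg.norm(v)
    for x in v:
        if abs(x) > tol:
            return v if x > 0 else -v
    return v

A = np.ones((4, 4))
values, vectors = np.linalg.eig(A)
k = int(np.argmax(np.abs(values)))   # index of the eigenvalue 4
print(canonicalize(vectors[:, k]))   # [0.5 0.5 0.5 0.5] from any implementation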

All in all, everything is really OK; it is just that the eigenvectors of your matrix are highly non-unique. Another solution, which I find more pleasing, is the one found by Wolfram Alpha.

Update

Generally, an M-by-M matrix has M eigenvalues. If two (or more) of these are identical, there are infinitely many possible realisations of the corresponding eigenvectors, because from the two (call them v1 and v2) we can construct a new one by v3 = a*v1 + b*v2, where a and b are arbitrary constants (not both zero). A short demonstration follows.
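A quick numpy demonstration of this point (v1, v2, a and b are arbitrary picks of mine):

import numpy as np

A = np.ones((4, 4))

# Two independent eigenvectors for the eigenvalue 0: elements sum to zero.
v1 = np.array([1.0, -1.0, 0.0, 0.0])
v2 = np.array([0.0, 0.0, 1.0, -1.0])

a, b = 2.5, -0.75                    # arbitrary constants
v3 = a * v1 + b * v2
assert np.allclose(A @ v3, 0 * v3)   # still an eigenvector for the eigenvalue 0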

jmd_dk
  • Indeed, this is the answer. But how can I ensure that I get the same basis across platforms? Or is there no way without delving into the guts of the implementations? – The Dude Dec 28 '16 at 15:55
  • You can normalize the eigenvectors as you want after you get them. This way you can get the same eigenvector for eigenvalue 4 regardless of the implementation. I don't know of any way to get the three others to agree regardless of implementation. – jmd_dk Dec 28 '16 at 15:59