numpy.eigh and MATLAB give inconsistent answers for eigenvectors?
I'm writing code that diagonalizes a 4x4 Hermitian matrix. It's simple enough, but the eigenvectors returned by MATLAB and numpy disagree wildly.
Reproducible code:
import numpy as np
import numpy.linalg as lin

def symbreak_h_bound(kpt, m=0.1):
    # Returns w, v corresponding to the eigenvalues and eigenvectors of Hbound
    # A bunch of constants; necessary to reproduce the problem
    ghoverG = 1. / 2.
    kx = kpt[0]
    ky = kpt[1]
    sinkx = 2 * np.sin(2 * kx)
    sinky = 2 * np.sin(2 * ky)
    gamma = 2 * np.sin(2 * kx + 2 * np.pi * ghoverG)
    betax = 1j * np.exp(-2j * kx) * m
    betay = 1j * np.exp(-2j * ky) * m
    alpha = np.exp(1j * 2 * np.pi * ghoverG)
    # Hbound is Hermitian; np.allclose(Hbound, Hbound.H) returns True
    Hbound = np.matrix(
        [
            [0, sinky + betay, sinkx + betax, 0],
            [sinky + np.conjugate(betay), 0, 0, gamma + np.conjugate(alpha) * betax],
            [sinkx + np.conjugate(betax), 0, 0, sinky + betay],
            [0, gamma + alpha * np.conjugate(betax), sinky + np.conjugate(betay), 0],
        ]
    )
    w, v = lin.eigh(Hbound)
    return w, v

# symbreak_h_bound builds a 4x4 matrix from the input kpt, then returns its eigenvalues and eigenvectors
w, v = symbreak_h_bound([np.pi / 600, np.pi / 600])
Straightforward, but here's the issue: MATLAB and numpy give wildly different answers for the eigenvectors of almost exactly the same matrix. Here is the numpy output:
Here's the MATLAB output:
We can see that Hbound is almost the same as MATLAB's M, which is reflected in the eigenvalues agreeing (w == D1). But the eigenvectors don't: v != V1.
I know that if v is an eigenvector, then $e^{i\theta} v$ is also an eigenvector. However, that alone doesn't explain the situation, as the columns v[:,i] are not even exactly orthogonal to each other (up to floating-point errors).
My code relies on the eigenvectors varying continuously as kpt -> kpt + delta. So I have two questions: why are numpy and MATLAB inconsistent here, and, more importantly, is there a way to ensure a continuous change v -> v + delta_v in the eigenvectors when moving from kpt -> kpt + delta?
Solution 1:[1]
The v in your example is quite close to orthogonal.
>>> v.H * v
matrix([[ 1.00000000e+00 +0.00000000e+00j,
2.49800181e-16 +0.00000000e+00j,
-1.11022302e-16 +4.16333634e-17j,
-1.94289029e-16 +3.46944695e-17j],
[ 2.49800181e-16 +0.00000000e+00j,
1.00000000e+00 +0.00000000e+00j,
-1.11022302e-16 +8.32667268e-17j,
-1.94289029e-16 +3.46944695e-17j],
[ -1.11022302e-16 -4.16333634e-17j,
-1.11022302e-16 -8.32667268e-17j,
1.00000000e+00 +0.00000000e+00j,
1.11022302e-16 -6.93889390e-18j],
[ -1.94289029e-16 -3.46944695e-17j,
-1.94289029e-16 -3.46944695e-17j,
1.11022302e-16 +6.93889390e-18j,
1.00000000e+00 +0.00000000e+00j]])
Are you looking at v.transpose()*v perhaps?
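For complex matrices, the plain transpose doesn't test orthonormality; you need the conjugate transpose. A quick sanity check with the v from the question (just a sketch of the check):
# eigh on a Hermitian matrix returns a unitary eigenvector matrix, so
# v.H * v (conjugate transpose) is the identity up to floating-point error.
# v.T * v (plain transpose) generally is not, for complex v.
print(np.allclose(v.H * v, np.eye(4)))  # True
print(np.allclose(v.T * v, np.eye(4)))  # False in general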
The first two eigenvectors do span the same subspace as Matlab's first two eigenvectors, and the same for the second two. There's obviously some freedom in the basis for the subspace because of the repeated eigenvalues.
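One basis-independent way to check this is to compare the projectors onto the degenerate subspaces; the projector is the same no matter which orthonormal basis eigh or MATLAB happens to pick. A sketch, assuming V1 holds MATLAB's eigenvectors loaded into numpy (e.g. via scipy.io.loadmat) with the columns sorted the same way:
# Projector onto the subspace spanned by the first two eigenvectors.
# Projectors are basis-independent, so they agree even when the
# individual eigenvectors differ by an arbitrary unitary mixing.
P_numpy = v[:, :2] * v[:, :2].H
P_matlab = V1[:, :2] @ np.conj(V1[:, :2]).T  # V1: MATLAB's eigenvectors (assumed imported)
print(np.allclose(P_numpy, P_matlab))  # True when the subspaces coincide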
I'm not sure about the choices eigh makes for the representation of a subspace corresponding to a repeated eigenvalue when you perturb the problem. I doubt there's a guarantee that it'll be continuous, even if the subspace itself behaves nicely.
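If you need continuity along a path in k-space, one common workaround is to gauge-fix the output yourself: after each eigh call, rotate the new eigenvectors to maximize overlap with the ones from the previous kpt. A minimal sketch (my own, not something eigh provides) that handles the per-column phase freedom; within an exactly degenerate subspace you would additionally need a unitary (Procrustes-style) rotation of that block:
def align_phases(v_prev, v_new):
    # Fix the arbitrary e^{i*theta} phase of each column of v_new by
    # making its overlap with the matching column of v_prev real and positive.
    v_prev = np.asarray(v_prev)
    v_aligned = np.asarray(v_new).copy()
    for i in range(v_aligned.shape[1]):
        overlap = np.vdot(v_prev[:, i], v_aligned[:, i])  # vdot conjugates its first argument
        if abs(overlap) > 1e-12:
            v_aligned[:, i] *= np.conj(overlap) / abs(overlap)
    return v_aligned

# Usage along a path: carry the previous eigenvectors forward.
w0, v0 = symbreak_h_bound([np.pi / 600, np.pi / 600])
w1, v1 = symbreak_h_bound([np.pi / 600 + 1e-4, np.pi / 600])
v1 = align_phases(v0, v1)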
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | bg2b |


