Comparison with numpy.linalg.eig and scipy.linalg.eig
In [1]:
import numpy as np
import scipy.linalg
import mpmath as mp
from eigen_markov import stoch_eig
We consider $2 \times 2$ Markov transition matrices:
In [2]:
def P(x, y):
    return np.array([[1-x, x], [y, 1-y]])
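As an aside (not part of the original notebook), the stationary distribution of P(x, y) has the closed form $(y/(x+y),\ x/(x+y))$, which gives an independent check on the eigenvector computations below. A minimal sketch:

```python
import numpy as np

def P(x, y):
    # 2 x 2 Markov transition matrix with off-diagonal entries x and y
    return np.array([[1-x, x], [y, 1-y]])

def stationary(x, y):
    # Closed-form stationary distribution of P(x, y): pi = (y, x) / (x + y)
    return np.array([y, x]) / (x + y)

pi = stationary(0.4, 0.2)
print(pi)                                  # stationary distribution of P(0.4, 0.2)
print(np.allclose(pi.dot(P(0.4, 0.2)), pi))  # pi P == pi up to rounding
```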
Simple example:
In [3]:
P0 = P(0.4, 0.2)
print P0
First, let np.linalg.eig
compute the left eigenvectors:
In [4]:
vals0, vecs0 = np.linalg.eig(P0.T)
print vals0
print vecs0
The second column corresponds to the dominant eigenvalue, which is $1$:
In [5]:
vecs0[:, 1] / sum(vecs0[:, 1])
Out[5]:
Next, let scipy.linalg.eig
compute the left eigenvectors:
In [6]:
vals1, vecs1 = scipy.linalg.eig(P0, left=True, right=False)
print vals1
print vecs1
Again, the second column corresponds to the dominant eigenvalue:
In [7]:
vecs1[:, 1] / sum(vecs1[:, 1])
Out[7]:
Finally, let stoch_eig
compute the left eigenvector that belongs to eigenvalue $1$:
In [8]:
mp.mp.stoch_eig(P0)
Out[8]:
In [9]:
def stoch_eig_by_numpy(P):
    vals, vecs = np.linalg.eig(P.T)
    print 'Eigenvalues:', vals
    if vals[0] > vals[1]: ineq = '>'
    elif vals[0] < vals[1]: ineq = '<'
    else: ineq = '=='
    print 'vals[0] {0} vals[1]'.format(ineq)
    i = np.argmax(vals)  # index of the dominant eigenvalue
    j = 1 - i            # index of the other eigenvalue
    print 'Dominant eigenvector :', vecs[:, i]/sum(vecs[:, i])
    print 'The other eigenvector:', vecs[:, j]/sum(vecs[:, j])
In [10]:
stoch_eig_by_numpy(P0)
In [11]:
def stoch_eig_by_scipy(P):
    vals, vecs = scipy.linalg.eig(P, left=True, right=False)
    print 'Eigenvalues:', vals
    if vals[0] > vals[1]: ineq = '>'
    elif vals[0] < vals[1]: ineq = '<'
    else: ineq = '=='
    print 'vals[0] {0} vals[1]'.format(ineq)
    i = np.argmax(vals)  # index of the dominant eigenvalue
    j = 1 - i            # index of the other eigenvalue
    print 'Dominant eigenvector :', vecs[:, i]/sum(vecs[:, i])
    print 'The other eigenvector:', vecs[:, j]/sum(vecs[:, j])
In [12]:
stoch_eig_by_scipy(P0)
We consider P(x, x/2) for very small x:
In [13]:
xs = [float('1e-'+str(i)) for i in range(14, 19)]
print xs
For example, if x = 1e-14:
In [14]:
print P(1e-14, 1e-14/2)
Verify that the eigenvalues are $1$ and $1 - \frac{3}{2} x$.
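This claim can be checked exactly with rational arithmetic (a quick aside, not in the original notebook): for P(x, x/2) the trace is $2 - \frac{3}{2}x$ and the determinant is $1 - \frac{3}{2}x$; since a row-stochastic matrix always has eigenvalue $1$, the other eigenvalue is trace $- \; 1 = 1 - \frac{3}{2}x$:

```python
from fractions import Fraction

x = Fraction(1, 10**14)
# Entries of P(x, x/2) as exact rationals
a, b = 1 - x, x              # first row
c, d = x/2, 1 - x/2          # second row

trace = a + d                # = 2 - (3/2) x
det = a*d - b*c              # = 1 - (3/2) x
# Eigenvalues are 1 and trace - 1 (= det, since their product is det)
assert trace - 1 == det == 1 - Fraction(3, 2)*x
print(det)
```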
np.linalg.eig exhibits strange behavior for x <= 1e-16:
In [15]:
for x in xs:
    print 'x =', x
    stoch_eig_by_numpy(P(x, x/2))
scipy.linalg.eig likewise exhibits strange behavior for x <= 1e-16:
In [16]:
for x in xs:
    print 'x =', x
    stoch_eig_by_scipy(P(x, x/2))
stoch_eig always returns the correct answer. This is only because the implemented algorithm never uses the actual values of the diagonal entries, relying instead on the assumption that each row sums to one:
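This property is characteristic of a GTH-type (Grassmann-Taksar-Heyman) elimination, which replaces each diagonal pivot by the sum of the off-diagonal entries in its row. The sketch below illustrates that idea; it is my own illustration, not the actual mpmath implementation:

```python
import numpy as np

def gth_solve(A):
    # GTH-like elimination for the stationary distribution of a Markov chain.
    # The diagonal of A is never read, so errors in entries like 1 - x
    # for tiny x cannot affect the result. Illustrative sketch only.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 1):
        # Pivot = sum of off-diagonal entries in row k, not A[k, k]
        scale = A[k, k+1:].sum()
        A[k+1:, k] /= scale
        A[k+1:, k+1:] += np.outer(A[k+1:, k], A[k, k+1:])
    # Back substitution
    x = np.zeros(n)
    x[n-1] = 1
    for k in range(n - 2, -1, -1):
        x[k] = x[k+1:].dot(A[k+1:, k])
    return x / x.sum()

# Stationary distribution of P(0.4, 0.2), approximately [1/3, 2/3]
print(gth_solve([[0.6, 0.4], [0.2, 0.8]]))
```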
In [17]:
for x in xs:
    print 'x =', x
    print mp.mp.stoch_eig(P(x, x/2))
In [18]:
for x in xs:
    print 'x =', x
    print mp.fp.stoch_eig(P(x, x/2))
So let us consider Q(1-x, 1-x/2), defined as follows, instead of P(x, x/2):
In [19]:
def Q(x, y):
    return np.array([[x, 1-x], [1-y, y]])
Mathematically, P(x, x/2) == Q(1-x, 1-x/2), but numerically this is not necessarily the case:
In [20]:
for x in xs:
    print 'x =', x
    print mp.mp.stoch_eig(Q(1-x, 1-x/2))
For x = 1e-16, 1 - (1 - x/2) evaluates to 0 in double precision, and for x <= 1e-17, 1 - (1 - x) evaluates to 0 as well, so wrong answers are obtained in these cases.
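The cancellation can be seen directly in standard double precision, where the spacing of floats just below 1 is $2^{-53} \approx 1.11 \times 10^{-16}$ (a standalone illustration, not from the original notebook):

```python
# Catastrophic cancellation in IEEE double precision
x = 5e-17              # this is x/2 when x = 1e-16
print(1 - (1 - x))     # 0.0: 5e-17 is below half an ulp of 1, so 1 - x rounds to 1.0
x = 1e-16
print(1 - (1 - x))     # 2**-53 ~ 1.11e-16, not 1e-16: the low bits of x are lost
```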
This is where mpmath's arbitrary-precision arithmetic pays off; let us set a sufficiently high precision and compute again:
In [21]:
print mp.mp
In [22]:
xs = [mp.mpf('1e-'+str(i)) for i in range(14, 19)]
for x in xs:
    with mp.workdps(30):
        v = mp.mp.stoch_eig(Q(1-x, 1-x/2))
    print 'x =', x
    print v
We have obtained the correct answers.
Of course, for this simple case, one could reduce 1 - (1 - x) to x by hand. In more complex cases, however, that will not be possible, and the working precision must be set appropriately.
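The effect of the working precision on the cancellation itself can be isolated in a few lines; mp.workdps changes the precision only locally (a minimal standalone illustration):

```python
import mpmath as mp

x = mp.mpf('1e-17')
print(mp.mp.dps)          # default working precision: 15 digits
print(1 - (1 - x))        # 0.0: x is lost when 1 - x is rounded to 15 digits
with mp.workdps(30):
    print(1 - (1 - x))    # approximately 1e-17: x survives at 30 digits
```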
In [23]:
%timeit np.linalg.eig(P0.T)
%timeit scipy.linalg.eig(P0, left=True, right=False)
%timeit mp.mp.stoch_eig(P0)
%timeit mp.fp.stoch_eig(P0)
In [24]:
import platform
platform.platform()
Out[24]:
In [25]:
import sys
print sys.version
In [26]:
np.__version__
Out[26]:
In [27]:
scipy.__version__
Out[27]:
In [28]:
mp.__version__
Out[28]: