# eigen_markov: Illustration 3

Comparison with numpy.linalg.eig and scipy.linalg.eig



In [1]:

import numpy as np
import scipy.linalg
import mpmath as mp
from eigen_markov import stoch_eig



We consider $2 \times 2$ Markov transition matrices:



In [2]:

def P(x, y):
    return np.array([[1-x, x], [y, 1-y]])



Simple example:



In [3]:

P0 = P(0.4, 0.2)
print P0




[[ 0.6  0.4]
[ 0.2  0.8]]



First, let np.linalg.eig compute the left eigenvectors, by passing it the transpose P0.T:



In [4]:

vals0, vecs0 = np.linalg.eig(P0.T)
print vals0
print vecs0




[ 0.4  1. ]
[[-0.70710678 -0.4472136 ]
[ 0.70710678 -0.89442719]]



The second column corresponds to the dominant eigenvalue, which is $1$:



In [5]:

vecs0[:, 1] / sum(vecs0[:, 1])




Out[5]:

array([ 0.33333333,  0.66666667])
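As a sanity check, this $2 \times 2$ family has a closed-form stationary distribution: for P(x, y) it is $(y/(x+y), x/(x+y))$, which for $x = 0.4$, $y = 0.2$ equals $(1/3, 2/3)$ and satisfies $\pi P = \pi$:

```python
import numpy as np

P0 = np.array([[0.6, 0.4], [0.2, 0.8]])

# Closed-form stationary distribution (y/(x+y), x/(x+y)) with x=0.4, y=0.2
pi = np.array([0.2, 0.4]) / 0.6

# pi is a left eigenvector of P0 with eigenvalue 1: pi P0 = pi
assert np.allclose(pi.dot(P0), pi)
assert np.allclose(pi, [1/3., 2/3.])
```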



Next, let scipy.linalg.eig compute the left eigenvectors:



In [6]:

vals1, vecs1 = scipy.linalg.eig(P0, left=True, right=False)
print vals1
print vecs1




[ 0.4+0.j  1.0+0.j]
[[-0.70710678 -0.4472136 ]
[ 0.70710678 -0.89442719]]



Again, the second column corresponds to the dominant eigenvalue:



In [7]:

vecs1[:, 1] / sum(vecs1[:, 1])




Out[7]:

array([ 0.33333333,  0.66666667])



Finally, let stoch_eig compute the left eigenvector that belongs to eigenvalue $1$:



In [8]:

mp.mp.stoch_eig(P0)




Out[8]:

matrix(
[['0.333333333333333', '0.666666666666667']])




In [9]:

def stoch_eig_by_numpy(P):
    vals, vecs = np.linalg.eig(P.T)

    print 'Eigenvalues:', vals
    if vals[0] > vals[1]: ineq = '>'
    elif vals[0] < vals[1]: ineq = '<'
    else: ineq = '=='
    print 'vals[0] {0} vals[1]'.format(ineq)

    i = np.argmax(vals)  # index of the dominant eigenvalue
    print 'Dominant eigenvector :', vecs[:, i]/sum(vecs[:, i])
    print 'The other eigenvector:', vecs[:, 1-i]/sum(vecs[:, 1-i])




In [10]:

stoch_eig_by_numpy(P0)




Eigenvalues: [ 0.4  1. ]
vals[0] < vals[1]
Dominant eigenvector : [ 0.33333333  0.66666667]
The other eigenvector: [  6.36905167e+15  -6.36905167e+15]




In [11]:

def stoch_eig_by_scipy(P):
    vals, vecs = scipy.linalg.eig(P, left=True, right=False)

    print 'Eigenvalues:', vals
    if vals[0] > vals[1]: ineq = '>'
    elif vals[0] < vals[1]: ineq = '<'
    else: ineq = '=='
    print 'vals[0] {0} vals[1]'.format(ineq)

    i = np.argmax(vals)  # index of the dominant eigenvalue
    print 'Dominant eigenvector :', vecs[:, i]/sum(vecs[:, i])
    print 'The other eigenvector:', vecs[:, 1-i]/sum(vecs[:, 1-i])




In [12]:

stoch_eig_by_scipy(P0)




Eigenvalues: [ 0.4+0.j  1.0+0.j]
vals[0] < vals[1]
Dominant eigenvector : [ 0.33333333  0.66666667]
The other eigenvector: [  6.36905167e+15  -6.36905167e+15]
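The huge entries in "the other eigenvector" are not a defect of the solvers: the left eigenvector belonging to eigenvalue $0.4$ is proportional to $(1, -1)$, whose entries sum to zero, so normalizing by the (roundoff-sized) computed sum produces entries of order $10^{15}$. A quick check:

```python
import numpy as np

P0 = np.array([[0.6, 0.4], [0.2, 0.8]])
vals, vecs = np.linalg.eig(P0.T)

# Left eigenvector for the subdominant eigenvalue 0.4
v = vecs[:, np.argmin(vals)]

# Its entries sum to zero up to roundoff, so dividing by the sum
# yields entries of order 1/eps
assert abs(v.sum()) < 1e-12
```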



## Examples with very small entries

We consider P(x, x/2) for very small x:



In [13]:

xs = [float('1e-'+str(i)) for i in range(14, 19)]
print xs




[1e-14, 1e-15, 1e-16, 1e-17, 1e-18]



For example, if x = 1e-14:



In [14]:

print P(1e-14, 1e-14/2)




[[  1.00000000e+00   1.00000000e-14]
[  5.00000000e-15   1.00000000e+00]]



Note that the eigenvalues are $1$ and $1 - \frac{3}{2} x$.
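Indeed, the trace of P(x, y) is $2 - x - y$ and one eigenvalue is always $1$ (rows sum to one), so the other eigenvalue is $1 - x - y$; with $y = x/2$ this is $1 - \frac{3}{2} x$. A quick check at a moderate x:

```python
import numpy as np

def P(x, y):
    return np.array([[1-x, x], [y, 1-y]])

# At a moderate x the eigenvalues 1 and 1 - 3x/2 are well separated
x = 0.4
vals = np.linalg.eigvals(P(x, x/2))
assert np.allclose(sorted(vals), [1 - 1.5*x, 1.0])
```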

### numpy.linalg.eig

Exhibits strange behavior for x <= 1e-16:



In [15]:

for x in xs:
    print 'x =', x
    stoch_eig_by_numpy(P(x, x/2))




x = 1e-14
Eigenvalues: [ 1.  1.]
vals[0] < vals[1]
Dominant eigenvector : [ 0.33327403  0.66672597]
The other eigenvector: [-3752.72158846  3753.72158846]
x = 1e-15
Eigenvalues: [ 1.  1.]
vals[0] > vals[1]
Dominant eigenvector : [ 0.34787795  0.65212205]
The other eigenvector: [ 812.45122444 -811.45122444]
x = 1e-16
Eigenvalues: [ 1.  1.]
vals[0] < vals[1]
Dominant eigenvector : [-0.29062231  1.29062231]
The other eigenvector: [ 1.  0.]
x = 1e-17
Eigenvalues: [ 1.  1.]
vals[0] == vals[1]
Dominant eigenvector : [ 0.41421356  0.58578644]
The other eigenvector: [ 3.86742915 -2.86742915]
x = 1e-18
Eigenvalues: [ 1.  1.]
vals[0] == vals[1]
Dominant eigenvector : [ 0.41421356  0.58578644]
The other eigenvector: [ 3.45411054 -2.45411054]



### scipy.linalg.eig

Exhibits strange behavior for x <= 1e-16:



In [16]:

for x in xs:
    print 'x =', x
    stoch_eig_by_scipy(P(x, x/2))




x = 1e-14
Eigenvalues: [ 1.+0.j  1.+0.j]
vals[0] < vals[1]
Dominant eigenvector : [ 0.33339254  0.66660746]
The other eigenvector: [-1442.33654404  1443.33654404]
x = 1e-15
Eigenvalues: [ 1.+0.j  1.+0.j]
vals[0] > vals[1]
Dominant eigenvector : [ 0.35008639  0.64991361]
The other eigenvector: [-24.87742252  25.87742252]
x = 1e-16
Eigenvalues: [ 1.+0.j  1.+0.j]
vals[0] < vals[1]
Dominant eigenvector : [ 0.  1.]
The other eigenvector: [ 1.81937256 -0.81937256]
x = 1e-17
Eigenvalues: [ 1.+0.j  1.+0.j]
vals[0] == vals[1]
Dominant eigenvector : [ 0.59742182  0.40257818]
The other eigenvector: [-2.41421356  3.41421356]
x = 1e-18
Eigenvalues: [ 1.+0.j  1.+0.j]
vals[0] == vals[1]
Dominant eigenvector : [ 0.58694593  0.41305407]
The other eigenvector: [-2.41421356  3.41421356]



### stoch_eig

stoch_eig always returns the correct answer here, but only because the algorithm it implements does not use the actual values of the diagonal entries, relying instead on the assumption that each row sums to one:



In [17]:

for x in xs:
    print 'x =', x
    print mp.mp.stoch_eig(P(x, x/2))




x = 1e-14
[0.333333333333333  0.666666666666667]
x = 1e-15
[0.333333333333333  0.666666666666667]
x = 1e-16
[0.333333333333333  0.666666666666667]
x = 1e-17
[0.333333333333333  0.666666666666667]
x = 1e-18
[0.333333333333333  0.666666666666667]




In [18]:

for x in xs:
    print 'x =', x
    print mp.fp.stoch_eig(P(x, x/2))




x = 1e-14
[0.333333333333  0.666666666667]
x = 1e-15
[0.333333333333  0.666666666667]
x = 1e-16
[0.333333333333  0.666666666667]
x = 1e-17
[0.333333333333  0.666666666667]
x = 1e-18
[0.333333333333  0.666666666667]
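Avoiding the diagonal entries in this way is the hallmark of GTH (Grassmann-Taksar-Heyman) type elimination, which replaces each subtraction involving a diagonal entry by a sum of off-diagonal entries. The following is a minimal NumPy sketch of such a subtraction-free elimination, for illustration only (it is not the actual eigen_markov code):

```python
import numpy as np

def gth_solve(P):
    """Stationary distribution of a stochastic matrix by GTH-style
    elimination; only off-diagonal entries are used, no subtractions."""
    A = np.array(P, dtype=float)
    n = A.shape[0]
    for k in range(n - 1, 0, -1):
        scale = A[k, :k].sum()        # off-diagonal row sum, not 1 - A[k, k]
        A[:k, k] /= scale
        A[:k, :k] += np.outer(A[:k, k], A[k, :k])
    x = np.zeros(n)
    x[0] = 1.0
    for k in range(1, n):
        x[k] = x[:k].dot(A[:k, k])    # back substitution
    return x / x.sum()
```

For P(x, x/2) with x = 1e-18 this sketch also returns [1/3, 2/3], since it only ever touches the off-diagonal entries x and x/2, whose ratio is computed exactly.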



So, instead of P(x, x/2), let us consider Q(1-x, 1-x/2), where Q is defined as follows:



In [19]:

def Q(x, y):
    return np.array([[x, 1-x], [1-y, y]])



Mathematically, P(x, x/2) == Q(1-x, 1-x/2), but numerically this is not necessarily the case:



In [20]:

for x in xs:
    print 'x =', x
    print mp.mp.stoch_eig(Q(1-x, 1-x/2))




x = 1e-14
[0.333333333333333  0.666666666666667]
x = 1e-15
[0.357142857142857  0.642857142857143]
x = 1e-16
[0.0  1.0]
x = 1e-17
[1.0  0.0]
x = 1e-18
[1.0  0.0]



For x = 1e-16, 1 - (1 - x/2) evaluates to 0, while for x <= 1e-17, 1 - (1 - x) evaluates to 0 as well; in these cases, wrong answers are obtained.
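The cancellation is easy to observe directly in double precision:

```python
# x = 1e-16: 1 - x/2 rounds to 1.0, so the off-diagonal entry x/2 is lost
assert 1.0 - (1.0 - 1e-16/2) == 0.0

# x <= 1e-17: 1 - x rounds to 1.0 as well, so both off-diagonal entries are lost
assert 1.0 - (1.0 - 1e-17) == 0.0

# x = 1e-14 still survives the subtraction
assert 1.0 - (1.0 - 1e-14) != 0.0
```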

It is here that the arbitrary-precision arithmetic of mpmath comes into play; let us set a high enough precision and compute again:



In [21]:

print mp.mp




Mpmath settings:
mp.prec = 53                [default: 53]
mp.dps = 15                 [default: 15]
mp.trap_complex = False     [default: False]




In [22]:

xs = [mp.mpf('1e-'+str(i)) for i in range(14, 19)]
for x in xs:
    with mp.workdps(30):
        v = mp.mp.stoch_eig(Q(1-x, 1-x/2))
    print 'x =', x
    print v




x = 1.0e-14
[0.333333333333333  0.666666666666667]
x = 1.0e-15
[0.333333333333333  0.666666666666667]
x = 1.0e-16
[0.333333333333333  0.666666666666667]
x = 1.0e-17
[0.333333333333333  0.666666666666667]
x = 1.0e-18
[0.333333333333333  0.666666666666667]



We obtain the correct answers.

Of course, in this simple case one could reduce 1 - (1 - x) to x by hand. But in more complex cases this will not be possible, and the precision has to be set appropriately.

## Performance comparison



In [23]:

%timeit np.linalg.eig(P0.T)
%timeit scipy.linalg.eig(P0, left=True, right=False)
%timeit mp.mp.stoch_eig(P0)
%timeit mp.fp.stoch_eig(P0)




10000 loops, best of 3: 50.4 µs per loop
10000 loops, best of 3: 56.6 µs per loop
1000 loops, best of 3: 196 µs per loop
10000 loops, best of 3: 107 µs per loop




In [24]:

import platform
platform.platform()




Out[24]:

'Darwin-13.3.0-x86_64-i386-64bit'




In [25]:

import sys
print sys.version




2.7.8 (default, Jul  2 2014, 10:14:46)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)]




In [26]:

np.__version__




Out[26]:

'1.8.1'




In [27]:

scipy.__version__




Out[27]:

'0.14.0'




In [28]:

mp.__version__




Out[28]:

'0.19'



