eigen_markov: Illustration

This notebook illustrates some useful routines, to be added to mpmath, that are specialized for stochastic (or Markov) matrices.


In [1]:
import mpmath as mp

In [2]:
from eigen_markov import stoch_eig, gth_solve

stoch_eig

stoch_eig returns the stochastic eigenvector (or stationary probability distribution vector) of an irreducible stochastic matrix P, i.e., the solution to $x P = x$ normalized so that its 1-norm equals one.


In [3]:
P = mp.matrix([[0.9, 0.075, 0.025], [0.15, 0.8, 0.05], [0.25, 0.25, 0.5]])

In [4]:
print P


[ 0.9  0.075  0.025]
[0.15    0.8   0.05]
[0.25   0.25    0.5]

In [5]:
x = mp.mp.stoch_eig(P)

In [6]:
print x


[0.625  0.3125  0.0625]

In [7]:
print x * P


[0.625  0.3125  0.0625]

In [8]:
x*P == x


Out[8]:
True

In [9]:
type(x[0])


Out[9]:
mpmath.ctx_mp_python.mpf
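
As a cross-check of the definition above, here is a minimal sketch (not the library routine, and assuming P is irreducible) that obtains the same vector by ordinary linear algebra: write the fixed-point equations $x P = x$ as a linear system in the entries of x, replace one equation with the normalization, and solve.

def stationary_by_solve(P):
    # For each column j, x P = x gives sum_i x_i * (P[i,j] - delta_ij) = 0.
    # Use the first n-1 of these equations, replace the last one with
    # sum_i x_i = 1, and solve the resulting nonsingular system.
    n = P.rows
    A = mp.zeros(n, n)
    b = mp.zeros(n, 1)
    for j in range(n - 1):
        for i in range(n):
            A[j, i] = P[i, j] - (1 if i == j else 0)
    for i in range(n):
        A[n - 1, i] = 1
    b[n - 1] = 1
    sol = mp.lu_solve(A, b)                          # column vector
    return mp.matrix([[sol[i] for i in range(n)]])   # return as a row vector

print stationary_by_solve(P)   # should agree with x above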

Fast low-precision arithmetic:


In [10]:
P_fp = mp.fp.matrix([[0.9, 0.075, 0.025], [0.15, 0.8, 0.05], [0.25, 0.25, 0.5]])
x_fp = mp.fp.stoch_eig(P_fp)

In [11]:
print x_fp


[0.625  0.3125  0.0625]

In [12]:
type(x_fp[0])


Out[12]:
float

Arbitrary-precision interval arithmetic:


In [13]:
P_iv = mp.iv.matrix([[0.9, 0.075, 0.025], [0.15, 0.8, 0.05], [0.25, 0.25, 0.5]])
x_iv = mp.iv.stoch_eig(P_iv)

In [14]:
print x_iv


[[0.62499999999999922284, 0.62500000000000077716]  [0.31249999999999966693, 0.31250000000000033307]  [0.062499999999999972244, 0.062500000000000055511]]

In [15]:
type(x_iv[0])


Out[15]:
mpmath.ctx_iv.ivmpf

In [16]:
print x_iv * P_iv


[[0.6249999999999990008, 0.6250000000000009992]  [0.31249999999999955591, 0.3125000000000004996]  [0.062499999999999944489, 0.062500000000000069389]]

In [17]:
for value, interval in zip(x, x_iv):
    print '{0} is in {1}:'.format(value, interval), value in interval


0.625 is in [0.62499999999999922284, 0.62500000000000077716]: True
0.3125 is in [0.31249999999999966693, 0.31250000000000033307]: True
0.0625 is in [0.062499999999999972244, 0.062500000000000055511]: True

With high precision:


In [18]:
n = 1e-100
with mp.workdps(100):
    P0 = mp.matrix([[1-3*(mp.exp(n)-1), 3*(mp.exp(n)-1)], [mp.exp(n)-1, 1-(mp.exp(n)-1)]])

print P0


[                  1.0  3.00034190211597e-100]
[1.00011396737199e-100                    1.0]

In [19]:
print mp.mp.stoch_eig(P0)


[0.25  0.75]

gth_solve

gth_solve returns the (normalized) nontrivial solution to $x A = 0$ for an irreducible transition rate matrix A.
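
For reference, here is a minimal sketch of a GTH-type (Grassmann-Taksar-Heyman) elimination, not necessarily the library's implementation, assuming A is an irreducible transition rate matrix whose off-diagonal entries are nonnegative and whose rows sum to zero:

def gth_solve_sketch(A):
    A = A.copy()              # work on a copy
    n = A.rows
    # Forward elimination: the pivot is taken to be the sum of the
    # remaining off-diagonal entries in its row, so no subtraction
    # (and hence no cancellation) is performed.
    for k in range(n - 1):
        scale = mp.fsum(A[k, j] for j in range(k + 1, n))
        for i in range(k + 1, n):
            A[i, k] /= scale
            for j in range(k + 1, n):
                A[i, j] += A[i, k] * A[k, j]
    # Back substitution, followed by normalization to unit 1-norm.
    x = mp.matrix(1, n)
    x[n - 1] = 1
    for k in range(n - 2, -1, -1):
        x[k] = mp.fsum(x[j] * A[j, k] for j in range(k + 1, n))
    return x / mp.fsum(x[j] for j in range(n))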


In [20]:
A = P - mp.eye(3)

In [21]:
print A


[-0.1  0.075  0.025]
[0.15   -0.2   0.05]
[0.25   0.25   -0.5]

In [22]:
y = mp.mp.gth_solve(A)

In [23]:
print y


[0.625  0.3125  0.0625]

In [24]:
print mp.chop(y * A)


[0.0  0.0  0.0]

In [25]:
print mp.norm(y*A, p=1)


2.60208521396521e-17

In [26]:
print mp.eps


2.22044604925031e-16

Other examples

Stochastic matrices arising from a game-theoretic model called the Kandori-Mailath-Rob (KMR) model:
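
The KMR matrices constructed below are tridiagonal: under sequential revision only one player moves at a time, so the dynamic is a birth-death chain on the states 0, 1, ..., N (the number of players taking a given action). As a hypothetical illustration of this structure (p_up and p_down are placeholders, not the actual KMR transition probabilities), such a matrix could be assembled as follows:

def birth_death_matrix(N, p_up, p_down):
    # p_up(x), p_down(x): hypothetical probabilities of moving from state x
    # to x+1 and x-1 in one step; the remaining mass stays at x.
    P = mp.zeros(N + 1, N + 1)
    for x in range(N + 1):
        up = p_up(x) if x < N else mp.mpf(0)
        down = p_down(x) if x > 0 else mp.mpf(0)
        if x < N:
            P[x, x + 1] = up
        if x > 0:
            P[x, x - 1] = down
        P[x, x] = 1 - up - down
    return P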


In [27]:
from test_eigen_markov import KMR_Markov_matrix_sequential

In [28]:
KMR_matrices = \
[KMR_Markov_matrix_sequential(N=3, p=1./3, epsilon=1e-14),
 KMR_Markov_matrix_sequential(N=10, p=1./3, epsilon=1e-10),
 KMR_Markov_matrix_sequential(N=27, p=1./3, epsilon=1e-2)]

In [29]:
print 'KMR_matrices[0] ({0} x {1})'.format(KMR_matrices[0].rows, KMR_matrices[0].cols)
print KMR_matrices[0]


KMR_matrices[0] (4 x 4)
[0.999999999999995               5.0e-15                0.0                0.0]
[0.333333333333332   4.9960036108132e-15  0.666666666666663                0.0]
[              0.0  3.33333333333333e-15  0.666666666666665  0.333333333333332]
[              0.0                   0.0            5.0e-15  0.999999999999995]

In [30]:
print 'KMR_matrices[1] ({0} x {1})'.format(KMR_matrices[1].rows, KMR_matrices[1].cols)
mp.nprint(KMR_matrices[1], 9)


KMR_matrices[1] (11 x 11)
[1.0  5.0e-11      0.0      0.0      0.0      0.0      0.0      0.0      0.0      0.0  0.0]
[0.1      0.9  4.5e-11      0.0      0.0      0.0      0.0      0.0      0.0      0.0  0.0]
[0.0      0.2      0.8  4.0e-11      0.0      0.0      0.0      0.0      0.0      0.0  0.0]
[0.0      0.0      0.3     0.35     0.35      0.0      0.0      0.0      0.0      0.0  0.0]
[0.0      0.0      0.0      0.2      0.2      0.6      0.0      0.0      0.0      0.0  0.0]
[0.0      0.0      0.0      0.0  2.5e-11      0.5      0.5      0.0      0.0      0.0  0.0]
[0.0      0.0      0.0      0.0      0.0  3.0e-11      0.6      0.4      0.0      0.0  0.0]
[0.0      0.0      0.0      0.0      0.0      0.0  3.5e-11      0.7      0.3      0.0  0.0]
[0.0      0.0      0.0      0.0      0.0      0.0      0.0  4.0e-11      0.8      0.2  0.0]
[0.0      0.0      0.0      0.0      0.0      0.0      0.0      0.0  4.5e-11      0.9  0.1]
[0.0      0.0      0.0      0.0      0.0      0.0      0.0      0.0      0.0  5.0e-11  1.0]

In [31]:
print 'KMR_matrices[2] ({0} x {1})'.format(KMR_matrices[2].rows, KMR_matrices[2].cols)


KMR_matrices[2] (28 x 28)

In [32]:
x0 = mp.mp.stoch_eig(KMR_matrices[0])
print x0


[4.99999999999992e-15  7.49999999999992e-29  1.49999999999998e-14  0.99999999999998]

In [33]:
print x0 * KMR_matrices[0]


[4.99999999999992e-15  7.49999999999992e-29  1.49999999999998e-14  0.99999999999998]

In [34]:
print mp.norm(x0*KMR_matrices[0] - x0, p=1)


0.0

In [35]:
vecs0 = []
for P in KMR_matrices:
    vecs0.append(mp.mp.stoch_eig(P))

Errors $\lVert x P - x\rVert_1$:


In [36]:
for i, P in enumerate(KMR_matrices):
    print 'KMR_matrices[{0}]:'.format(i), mp.norm(vecs0[i]*P - vecs0[i], p=1)


KMR_matrices[0]: 0.0
KMR_matrices[1]: 1.63132611699963e-55
KMR_matrices[2]: 8.27180612649327e-25

Comparison with eig

Eigenvectors can also be computed with eig, mpmath's routine for the general eigenvalue problem. Here we are looking for the dominant left eigenvector of a stochastic matrix, i.e., the left eigenvector associated with the eigenvalue 1.


In [37]:
def stoch_eig_by_eig(P):
    E, EL = mp.eig(mp.matrix(P), left=True, right=False)
    E, EL = mp.eig_sort(E, EL)

    dom_eigvec = EL[EL.rows-1, :]
    return dom_eigvec/sum(dom_eigvec)

In [38]:
P = mp.matrix([[0.9, 0.075, 0.025], [0.15, 0.8, 0.05], [0.25, 0.25, 0.5]])
print P


[ 0.9  0.075  0.025]
[0.15    0.8   0.05]
[0.25   0.25    0.5]

In [39]:
z = stoch_eig_by_eig(P)
print z


[0.624999999999999  0.312500000000001  0.0625000000000002]

In [40]:
print z * P


[0.624999999999999  0.312500000000001  0.0625000000000001]

In [41]:
mp.norm(z*P - z, p=1)


Out[41]:
mpf('4.5796699765787707e-16')

It is natural to suspect that stoch_eig, which is specialized for stochastic matrices, performs better on them than the general-purpose routine eig. The experiments below suggest that this is indeed the case in terms of speed, and that stoch_eig is at least as accurate as eig.

KMR matrices


In [42]:
import time

In [43]:
v0, v1 = [], []
t0, t1 = [], []

for P in KMR_matrices:
    start = time.time()
    v = mp.mp.stoch_eig(P)
    timediff = time.time() - start
    v0.append(v)
    t0.append(timediff)

for P in KMR_matrices:
    start = time.time()
    v = stoch_eig_by_eig(P)
    timediff = time.time() - start
    v1.append(v)
    t1.append(timediff)

In [44]:
for i, P in enumerate(KMR_matrices):
    print 'KMR_matrices[{0}] ({1} x {2})'.format(i, P.rows, P.cols)
    print '  stoch_eig:', mp.norm(v0[i]*P - v0[i], 1), t0[i]
    print '  eig      :', mp.norm(v1[i]*P - v1[i], 1), t1[i]


KMR_matrices[0] (4 x 4)
  stoch_eig: 0.0 0.000830173492432
  eig      : 1.26217744835362e-29 0.00916790962219
KMR_matrices[1] (11 x 11)
  stoch_eig: 1.63132611699963e-55 0.011785030365
  eig      : 4.1359030637281e-25 0.171113967896
KMR_matrices[2] (28 x 28)
  stoch_eig: 8.27180612649327e-25 0.147373199463
  eig      : 1.71961402999288e-16 3.47893309593

In [45]:
for i, P in enumerate(KMR_matrices):
    print 'KMR_matrices[{0}] ({1} x {2})'.format(i, P.rows, P.cols)
    %timeit mp.mp.stoch_eig(P)
    %timeit stoch_eig_by_eig(P)


KMR_matrices[0] (4 x 4)
1000 loops, best of 3: 579 µs per loop
100 loops, best of 3: 7.75 ms per loop
KMR_matrices[1] (11 x 11)
100 loops, best of 3: 6.61 ms per loop
10 loops, best of 3: 132 ms per loop
KMR_matrices[2] (28 x 28)
10 loops, best of 3: 89.2 ms per loop
1 loops, best of 3: 3.3 s per loop

Random matrices:


In [46]:
sizes = [10, 30, 50]
rand_matrices = []

for n in sizes:
    P = mp.randmatrix(n, n)
    for i in range(n):
        P[i, :] /= mp.fsum(P[i, :])
    rand_matrices.append(P)

In [47]:
v0, v1 = [], []
t0, t1 = [], []

for P in rand_matrices:
    start = time.time()
    v = mp.mp.stoch_eig(P)
    timediff = time.time() - start
    v0.append(v)
    t0.append(timediff)

for P in rand_matrices:
    start = time.time()
    v = stoch_eig_by_eig(P)
    timediff = time.time() - start
    v1.append(v)
    t1.append(timediff)

In [48]:
for i, P in enumerate(rand_matrices):
    print 'rand_matrices[{0}] ({1} x {2})'.format(i, P.rows, P.cols)
    print '  stoch_eig:', mp.norm(v0[i]*P - v0[i], 1), t0[i]
    print '  eig      :', mp.norm(v1[i]*P - v1[i], 1), t1[i]


rand_matrices[0] (10 x 10)
  stoch_eig: 8.32667268468867e-17 0.01411485672
  eig      : 7.46055447988703e-16 0.368566036224
rand_matrices[1] (30 x 30)
  stoch_eig: 6.24500451351651e-17 0.22004199028
  eig      : 5.41113024603921e-16 10.5731830597
rand_matrices[2] (50 x 50)
  stoch_eig: 6.59194920871187e-17 0.807505130768
  eig      : 5.87832790113702e-16 45.5745909214

In [49]:
for i, P in enumerate(rand_matrices):
    print 'rand_matrices[{0}] ({1} x {2})'.format(i, P.rows, P.cols)
    %timeit mp.mp.stoch_eig(P)
    %timeit stoch_eig_by_eig(P)


rand_matrices[0] (10 x 10)
100 loops, best of 3: 7.71 ms per loop
1 loops, best of 3: 367 ms per loop
rand_matrices[1] (30 x 30)
10 loops, best of 3: 175 ms per loop
1 loops, best of 3: 10.3 s per loop
rand_matrices[2] (50 x 50)
1 loops, best of 3: 764 ms per loop
1 loops, best of 3: 43 s per loop

In [50]:
import os.path
import sys

print("mpmath imported from %s" % os.path.dirname(mp.__file__))
print("mpmath backend: %s" % mp.libmp.backend.BACKEND)
print("mpmath mp class: %s" % repr(mp.mp))
print("mpmath version: %s" % mp.__version__)
print("Python version: %s" % sys.version)


mpmath imported from /usr/local/lib/python2.7/site-packages/mpmath
mpmath backend: python
mpmath mp class: <mpmath.ctx_mp.MPContext object at 0x10293ee50>
mpmath version: 0.19
Python version: 2.7.8 (default, Jul  2 2014, 10:14:46) 
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)]
