Tensor Network Random Unitary Evolution

This example demonstrates some features of TensorNetwork manipulation, as well as the use of MatrixProductState.gate, based on 'evolving' an initial MPS with many random nearest-neighbour unitaries.


In [1]:
%matplotlib inline

In [2]:
from quimb.tensor import *
from quimb import *
import numpy as np

First we specify how many sites we want, how many gates to apply, and some other parameters:


In [3]:
# the initial state
n = 50
cyclic = False
chi = 4  # initial bond dimension
psi = MPS_rand_state(n, chi, cyclic=cyclic, tags='KET', dtype='complex128')

# the gates
n_gates = 5 * n
gates = [rand_uni(4) for _ in range(n_gates)]
u_tags = [f'U{i}' for i in range(n_gates)]

We generate a unique tag for each gate we will apply, which we can later use to address the gate tensors specifically, as shown below.

Then we apply each gate to the MPS inplace:


In [4]:
for U, t in zip(gates, u_tags):
    # generate a random coordinate to act on; for open boundary
    #     conditions the last valid pair starts at site n - 2
    i = np.random.randint(0, n - int(not cyclic))
    
    # apply the next gate to the coordinate
    #     propagate_tags='sites' (the default in fact) specifies that the
    #     new gate tensor should inherit the site tags from tensors it acts on
    psi.gate_(U, where=[i, i + 1], tags=t, propagate_tags='sites')
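
Now each gate tensor can be addressed via its unique tag, either individually or as a group (a quick illustrative aside):

# retrieve the single tensor tagged 'U0'
t_u0 = psi['U0']

# or select the sub-network of all the gate tensors at once
gates_tn = psi.select(u_tags, which='any')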

In [5]:
psi.graph(color=['KET'])


To make the graph a bit neater we can supply some fixed positions:


In [6]:
fix = {
    # key: tags that uniquely locate a tensor, value: (x, y) coordinate
    **{('KET', f'I{i}'): (i, +10) for i in range(n)},
    # can also use an external index, 'k0' etc, as a key to fix it
    **{f'k{i}': (i, -10) for i in range(n)},
}

When fixing graphs, it might also be necessary to play with the spring parameter k:


In [7]:
psi.graph(fix=fix, k=0.001, color=['I5', 'I15', 'I25', 'I35', 'I45'])


We can see the 'lightcone' effect of adding propagate_tags='sites'.
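
To make this concrete, we can select every tensor that has inherited a particular site tag, which picks out the original MPS tensor plus the gates in its 'lightcone' (an illustrative aside; the exact count varies with the random gate placement):

# the sub-network of all tensors carrying the site tag 'I25'
cone = psi.select('I25', which='any')
cone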

Next, let's form the norm overlap, adding one tag to all the gates and another to all the non-gate tensors:


In [8]:
psiH = psi.H
psiH.retag_({'KET': 'BRA'})  # retag so the 'bra' is distinguishable from the 'ket'

norm = (psiH | psi)
norm.add_tag('UGs', where=u_tags, which='any')
norm.add_tag('VEC0', where=u_tags, which='!any')
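
As a quick sanity check (assuming nothing else has been tagged), the counts of tensors carrying each tag should account for the whole network:

# 2 * 250 = 500 gate tensors, 2 * 50 = 100 MPS tensors
len(norm.select('UGs').tensors), len(norm.select('VEC0').tensors)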

In [9]:
norm.graph(color=['VEC0', 'UGs'])


Again, it's a bit messy, so we can specify positions for some of the tensors:


In [10]:
fix = {
    **{(f'I{i}', 'KET', 'VEC0'): (i, -20) for i in range(n)},
    **{(f'I{i}', 'BRA', 'VEC0'): (i, +20) for i in range(n)},
}

The iterations parameter can also be increased if the graph is not relaxing well.


In [11]:
(psiH | psi).graph(
    color=['VEC0', 'UGs', 'I5', 'I15', 'I25', 'I35', 'I45'], 
    node_size=30, 
    iterations=500,
    fix=fix, k=0.0001)


Later color tags take precedence over earlier ones.

Since this circuit is still relatively low depth, we can fully contract it as well; because the gates are unitary, the norm should remain 1 (up to floating point error):


In [12]:
# this calculates an optimized path for the contraction, which is cached 
#     the path can also be inspected with `print(expr)`
expr = (psi.H | psi).contract(all, get='path-info')
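
The returned object (opt_einsum's PathInfo, assuming a reasonably recent version) can also be queried for cost estimates of the found path:

# estimated FLOP count and size of the largest intermediate tensor
print(expr.opt_cost)
print(expr.largest_intermediate)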

In [13]:
%%time
(psi.H | psi) ^ all


CPU times: user 294 ms, sys: 7.99 ms, total: 302 ms
Wall time: 233 ms
Out[13]:
(0.9999999999999869-3.018418848199644e-16j)

Manually Perform the Partial Trace

Here, to perform the partial trace we need to do two things: (i) make a copy of the vector to act as the 'bra', with different indices; and (ii) match up the indices of the subsystem we want to trace out in the 'ket' and 'bra':


In [14]:
# make a 'bra' vector copy with 'upper' indices
psiH = psi.H
psiH.retag_({'KET': 'BRA'})
# this automatically reindexes the TN
psiH.site_ind_id = 'b{}'

# define two subsystems
sysa = range(15, 35)
sysb = [i for i in range(n) if i not in sysa]

# join indices for sysb only
psi.reindex_sites('dummy_ptr{}', sysb, inplace=True)
psiH.reindex_sites('dummy_ptr{}', sysb, inplace=True)

rho_ab = (psiH | psi)
rho_ab


Out[14]:
<TensorNetwork(tensors=600, structure='I{}', nsites=50)>
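
The 600 tensors are accounted for as the 50 MPS tensors plus the 250 gates, in each of the 'ket' and 'bra' copies.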

In [15]:
fix = {
    **{f'k{i}': (i, -10) for i in range(n)},
    **{(f'I{i}', 'KET', 'VEC0'): (i, 0) for i in range(n)},
    **{(f'I{i}', 'BRA', 'VEC0'): (i, 10) for i in range(n)},
    **{f'b{i}': (i, 20) for i in range(n)},
}

Again we can graph this:


In [16]:
rho_ab.graph(color=['VEC0'] + [f'I{i}' for i in sysa], iterations=500, fix=fix, k=0.001)


Estimate Subsystem Entropy

We can treat this whole reduced density matrix as an effective linear operator, $A$, and then calculate, for example, its entropy as the spectral sum function $-\text{Tr}(A \log_2 A) = -\sum_i \lambda_i \log_2 \lambda_i$, where $\lambda_i$ are the eigenvalues of $A$. First we set the left and right indices, then turn it into a scipy.sparse.linalg.LinearOperator:


In [17]:
right_ix = [f'b{i}' for i in sysa]
left_ix = [f'k{i}' for i in sysa]

rho_ab_lo = rho_ab.aslinearoperator(left_ix, right_ix)
rho_ab_lo


Out[17]:
<1048576x1048576 TNLinearOperator with dtype=complex128>
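
Although the dense matrix is never formed, the operator performs matrix-vector products lazily as tensor contractions, e.g. (a minimal sketch; each product is itself a full contraction, so not instant):

# act on a random complex vector over the 20 sites of sysa
x = np.random.randn(2**len(sysa)) + 1j * np.random.randn(2**len(sysa))
y = rho_ab_lo.matvec(x)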

This can be quite slow, so it is wise to check on its progress:


In [18]:
S_a = - approx_spectral_function(rho_ab_lo, f=xlogx, verbosity=1, R=10)


LANCZOS f(A) CALC: tol=0.01, tau=0.0001, R=10, bsz=1
k=45: Returning estimate -4.694127783003048.
Repeat 1: estimate is -4.694127783003048
k=45: Returning estimate -4.4119694962119.
Repeat 2: estimate is -4.4119694962119
k=44: Returning estimate -5.3279663992240005.
Repeat 3: estimate is -5.3279663992240005
Total estimate = -4.81135455947965 ± 0.22114307556886795
k=42: Returning estimate -5.966798016320431.
Repeat 4: estimate is -5.966798016320431
Total estimate = -5.100215423689845 ± 0.30014845548866037
k=43: Returning estimate -5.23851012159077.
Repeat 5: estimate is -5.23851012159077
Total estimate = -5.12787436327003 ± 0.24138979796914864
k=44: Returning estimate -5.9415184143818704.
Repeat 6: estimate is -5.9415184143818704
Total estimate = -5.263481705122004 ± 0.2361970927833128
k=44: Returning estimate -5.428367981881803.
Repeat 7: estimate is -5.428367981881803
Total estimate = -5.287036887516261 ± 0.20362580511634576
k=45: Returning estimate -4.830906132092729.
Repeat 8: estimate is -4.830906132092729
Total estimate = -5.230020543088319 ± 0.18598379947686963
k=40: Returning estimate -5.7462303344963965.
Repeat 9: estimate is -5.7462303344963965
Total estimate = -5.287377186578105 ± 0.1739385020638112
k=45: Returning estimate -4.163913458445849.
Repeat 10: estimate is -4.163913458445849
Total estimate = -5.175030813764879 ± 0.18938258832591615
ESTIMATE is -5.175030813764879 ± 0.18938258832591615

This yields the final entropy (in bits) of the central 20 qubits as:


In [19]:
S_a


Out[19]:
5.175030813764879

Since the TNLinearOperator repeatedly performs the same effective matrix-vector tensor contraction, and does not require high precision, this kind of computation is also ideally suited to being compiled into a GPU expression, using tensorflow for example.
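
For instance, as a speculative sketch (assuming a version of quimb that exposes the contract_backend context manager, and that tensorflow is installed), one might route the underlying contractions through a different backend like so:

# hypothetical: switch the backend used for the tensor contractions
from quimb.tensor import contract_backend

with contract_backend('tensorflow'):
    S_a_tf = - approx_spectral_function(rho_ab_lo, f=xlogx, R=10)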