In [1]:
%matplotlib inline
In [2]:
from quimb.tensor import *
from quimb import *
import numpy as np
First we specify how many sites we want, how many gates to apply, and some other parameters:
In [3]:
# the initial state
n = 50
cyclic = False
chi = 4  # initial bond dimension
psi = MPS_rand_state(n, chi, cyclic=cyclic, tags='KET', dtype='complex128')
# the gates
n_gates = 5 * n
gates = [rand_uni(4) for _ in range(n_gates)]
u_tags = [f'U{i}' for i in range(n_gates)]
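Each gate is a Haar-random 4x4 unitary acting on a pair of neighboring qubits. As a quick sanity sketch of what that means (using numpy's QR decomposition directly rather than quimb's rand_uni, so the sampling details differ), a unitary satisfies U†U = I and therefore preserves state norms:

```python
import numpy as np

rng = np.random.default_rng(42)

# sample a random 4x4 unitary via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, R = np.linalg.qr(A)
# fix the column phases (standard correction to make the distribution Haar)
d = np.diagonal(R)
U = Q * (d / np.abs(d))

# unitarity: U†U = I, so applying U preserves the 2-norm of any vector
assert np.allclose(U.conj().T @ U, np.eye(4))

v = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.isclose(np.linalg.norm(U @ v), np.linalg.norm(v))
```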
We generate a unique tag for each gate we will apply, which later lets us address the gate tensors specifically, either individually or as a group.
Then we apply each gate to the MPS inplace:
In [4]:
for U, t in zip(gates, u_tags):
    # generate a random coordinate
    i = np.random.randint(0, n - int(not cyclic))

    # apply the next gate to the coordinate
    # propagate_tags='sites' (the default in fact) specifies that the
    # new gate tensor should inherit the site tags from tensors it acts on
    psi.gate_(U, where=[i, i + 1], tags=t, propagate_tags='sites')
In [5]:
psi.graph(color=['KET'])
To make the graph a bit neater we can supply some fixed positions:
In [6]:
fix = {
    # key: tags that uniquely locate a tensor; val: (x, y) coord
    **{('KET', f'I{i}'): (i, +10) for i in range(n)},
    # can also use an external index, 'k0' etc., as a key to fix it
    **{f'k{i}': (i, -10) for i in range(n)},
}
When fixing graphs, it might also be necessary to play with the spring parameter k:
In [7]:
psi.graph(fix=fix, k=0.001, color=['I5', 'I15', 'I25', 'I35', 'I45'])
We can see the 'lightcone' effect of adding propagate_tags='sites'.
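The spreading of tags can be mimicked with a tiny pure-python sketch (a toy model for illustration only, not quimb's internals): each gate tensor inherits the union of the site tags on the tensors it touches, and the affected sites then carry the enlarged set forward, producing the widening 'lightcone':

```python
# toy model of propagate_tags='sites' (illustration only, not quimb internals)
n_sites = 6
site_tags = {i: {f'I{i}'} for i in range(n_sites)}

gate_tag_sets = []
for i, j in [(1, 2), (2, 3), (0, 1)]:
    # the new gate tensor inherits the site tags from both tensors it acts on
    inherited = site_tags[i] | site_tags[j]
    gate_tag_sets.append(inherited)
    # the affected sites now carry the enlarged tag set forward
    site_tags[i] = site_tags[j] = set(inherited)

# the third gate, at sites (0, 1), already 'sees' site 2 via the first gate
print(sorted(gate_tag_sets[2]))  # ['I0', 'I1', 'I2']
```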
Next let's form the norm overlap, adding one tag to all the gates and another to all the non-gate tensors:
In [8]:
psiH = psi.H
psiH.retag_({'KET': 'BRA'}) # specify this to distinguish
norm = (psiH | psi)
norm.add_tag('UGs', where=u_tags, which='any')
norm.add_tag('VEC0', where=u_tags, which='!any')
In [9]:
norm.graph(color=['VEC0', 'UGs'])
Again, it's a bit messy so we can specify some positions for some tensors:
In [10]:
fix = {
    **{(f'I{i}', 'KET', 'VEC0'): (i, -20) for i in range(n)},
    **{(f'I{i}', 'BRA', 'VEC0'): (i, +20) for i in range(n)},
}
The iterations parameter can also be increased if the graph is not relaxing well.
In [11]:
(psiH | psi).graph(
    color=['VEC0', 'UGs', 'I5', 'I15', 'I25', 'I35', 'I45'],
    node_size=30,
    iterations=500,
    fix=fix, k=0.0001)
Later color tags take precedence over earlier ones.
Since this circuit is still relatively low depth, we can fully contract it as well:
In [12]:
# this calculates an optimized path for the contraction, which is cached
# the path can also be inspected with `print(expr)`
expr = (psi.H | psi).contract(all, get='path-info')
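The same idea of precomputing a contraction path once and reusing it can be illustrated with plain numpy via np.einsum_path (quimb delegates the actual path finding to opt_einsum; the shapes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 16))
B = rng.normal(size=(16, 16))
C = rng.normal(size=(16, 8))

# find an optimized pairwise contraction order once...
path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')

# ...then reuse it for every contraction with the same index structure
out = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
assert np.allclose(out, A @ B @ C)
```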
In [13]:
%%time
(psi.H | psi) ^ all
Out[13]:
Here, to perform the partial trace we need to do two things: (i) make a copy of the vector to act as the 'bra', with different indices, and (ii) match up the subsystem indices we want to trace out in the 'ket' and 'bra':
In [14]:
# make a 'bra' vector copy with 'upper' indices
psiH = psi.H
psiH.retag_({'KET': 'BRA'})
# this automatically reindexes the TN
psiH.site_ind_id = 'b{}'
# define two subsystems
sysa = range(15, 35)
sysb = [i for i in range(n) if i not in sysa]
# join indices for sysb only
psi.reindex_sites('dummy_ptr{}', sysb, inplace=True)
psiH.reindex_sites('dummy_ptr{}', sysb, inplace=True)
rho_ab = (psiH | psi)
rho_ab
Out[14]:
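The same index-matching underlies the dense partial trace; as a small numpy illustration (independent of the tensor network above), tracing out subsystem B of a two-qubit state just contracts the matched ket and bra indices of B:

```python
import numpy as np

rng = np.random.default_rng(1)

# random normalized two-qubit state, reshaped to one index per subsystem
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
psi = psi.reshape(2, 2)  # indices: (subsystem A, subsystem B)

# partial trace over B: contract the ket and bra copies over the B index
rho_a = np.einsum('ab,cb->ac', psi, psi.conj())

# a reduced density matrix is Hermitian with unit trace
assert np.allclose(rho_a, rho_a.conj().T)
assert np.isclose(np.trace(rho_a), 1.0)
```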
In [15]:
fix = {
    **{f'k{i}': (i, -10) for i in range(n)},
    **{(f'I{i}', 'KET', 'VEC0'): (i, 0) for i in range(n)},
    **{(f'I{i}', 'BRA', 'VEC0'): (i, 10) for i in range(n)},
    **{f'b{i}': (i, 20) for i in range(n)},
}
Again we can graph this:
In [16]:
rho_ab.graph(color=['VEC0'] + [f'I{i}' for i in sysa], iterations=500, fix=fix, k=0.001)
In [17]:
right_ix = [f'b{i}' for i in sysa]
left_ix = [f'k{i}' for i in sysa]
rho_ab_lo = rho_ab.aslinearoperator(left_ix, right_ix)
rho_ab_lo
Out[17]:
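A TNLinearOperator behaves like a scipy.sparse.linalg.LinearOperator: it exposes the operator only through matrix-vector products, each implemented as a tensor contraction, without ever forming the dense matrix. A minimal scipy sketch of that interface (a generic stand-in, not quimb's implementation):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(2)

# a small symmetric matrix, hidden behind a matvec-only interface
M = rng.normal(size=(64, 64))
M = M + M.T

op = LinearOperator(shape=(64, 64), matvec=lambda v: M @ v, dtype=np.float64)

# iterative methods only ever call op.matvec, never materializing op densely
largest = eigsh(op, k=1, which='LA', return_eigenvectors=False)
assert np.isclose(largest[0], np.linalg.eigvalsh(M)[-1])
```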
This can be quite slow, so it is wise to check on progress:
In [18]:
S_a = - approx_spectral_function(rho_ab_lo, f=xlogx, verbosity=1, R=10)
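approx_spectral_function estimates Tr f(rho) stochastically (here f(x) = x log2 x, so -Tr[rho log2 rho] is the von Neumann entropy in bits) rather than by full diagonalization. The core idea can be sketched on a small dense density matrix with a Hutchinson-style trace estimator (a simplified illustration; quimb's actual routine uses a more sophisticated stochastic Lanczos approach):

```python
import numpy as np

rng = np.random.default_rng(3)

# random density matrix: positive semi-definite with unit trace
A = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
rho = A @ A.conj().T
rho /= np.trace(rho)

# form f(rho) = rho log2(rho) via eigendecomposition (small enough here)
w, V = np.linalg.eigh(rho)
f = np.where(w > 0, w * np.log2(np.clip(w, 1e-300, None)), 0.0)
f_rho = (V * f) @ V.conj().T

# Hutchinson estimator: Tr f(rho) ~ mean of z.f(rho).z over random +-1 vectors
R = 2000
ests = [np.real(z @ f_rho @ z)
        for z in (rng.choice([-1.0, 1.0], size=32) for _ in range(R))]
S_est = -np.mean(ests)

# compare against the exact entropy from the eigenvalues
S_exact = -np.sum(f)
assert abs(S_est - S_exact) < 0.3
```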
This yields the final entropy (in bits) of the central 20 qubits:
In [19]:
S_a
Out[19]:
Since TNLinearOperator repeatedly performs the same effective matrix-vector tensor contraction, and does not require high precision, this kind of computation is also ideally suited to being compiled into a GPU expression, using tensorflow for example.