In [1]:
%matplotlib inline
import quimb as qu
import quimb.tensor as qtn
from quimb.tensor.optimize_tensorflow import TNOptimizer
# explicitly create a session, i.e. don't use eager-mode
import tensorflow as tf
sess = tf.InteractiveSession()
First, find a (dense) PBC groundstate, $| gs \rangle$:
In [2]:
L = 16
# the cyclic (PBC) spin-1/2 Heisenberg hamiltonian
H = qu.ham_heis(L, sparse=True, cyclic=True)
gs = qu.groundstate(H)
Then we convert it to a dense 1D 'tensor network':
In [3]:
# this converts the dense vector to an effective 1D tensor network (with only one tensor)
target = qtn.Dense1D(gs)
print(target)
Next we create a random MPS, $|\psi\rangle$, as our initial guess:
In [4]:
bond_dim = 32
mps = qtn.MPS_rand_state(L, bond_dim, cyclic=True)
mps.graph()  # plot the tensor network structure
We now need to set up the function that 'prepares' our tensor network. In this example that just means making sure the state is always normalized.
In [5]:
def normalize_state(psi):
    return psi / (psi.H @ psi) ** 0.5
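As a quick sanity check (a sketch using only the objects defined above), the prepared state should then have unit norm:

# normalizing the random MPS should yield unit norm
psi = normalize_state(mps)
psi.H @ psi  # should be ~1.0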
Then we need to set up our 'loss' function - the function that returns the scalar quantity we want to minimize.
In [6]:
def negative_overlap(psi, target):
    return - (psi.H @ target) ** 2  # minus so as to minimize
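As an illustrative sketch, we can evaluate this loss once by hand on the normalized starting state - a random MPS should have only a small overlap with the groundstate, so we expect a value close to zero:

# evaluate the loss on the normalized random starting MPS
negative_overlap(normalize_state(mps), target)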
In [7]:
optmzr = TNOptimizer(
    mps,  # our initial input, the tensors of which to optimize
    loss_fn=negative_overlap,
    norm_fn=normalize_state,
    loss_constants={'target': target},  # this is a constant TN to supply to loss_fn
)
Now we are ready to optimize our tensor network! Note how we supplied the constant tensor network target - its tensors will not be changed.
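Schematically, each optimization step evaluates the composition of the two functions we supplied, something like the following sketch (the actual computation is performed on tensorflow versions of the tensors):

# what each gradient step computes, schematically:
loss = negative_overlap(normalize_state(mps), target)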
In [8]:
mps_opt = optmzr.optimize(100) # perform 100 gradient descent steps
The optimized (and normalized) output tensor network has already been converted back to numpy:
In [9]:
type(mps_opt[0].data)
Out[9]:
numpy.ndarray
And we can explicitly check that the returned state indeed matches the loss shown above:
In [10]:
((mps_opt.H & target) ^ all) ** 2
Out[10]:
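As an extra (hypothetical) cross-check, since $L=16$ is small, one could also contract the optimized MPS back to a dense vector - e.g. with to_dense() - and compute the overlap with $| gs \rangle$ directly:

# contract the MPS to a dense ket and compute the overlap explicitly
psi_opt = mps_opt.to_dense()
abs(psi_opt.H @ gs) ** 2  # should match the value above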
Other things to think about might be:

- trying a different optimizer: 'scipy' is another good choice, which defaults to the L-BFGS-B algorithm
- using the pytorch backend instead, via from quimb.tensor.optimize_pytorch import TNOptimizer (see the sketch below), though you won't be able to optimize non-real tensor networks
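For example, the pytorch version might be dropped in like so - a sketch assuming it accepts the same arguments as the tensorflow version used above:

# same interface, different backend (assumed)
from quimb.tensor.optimize_pytorch import TNOptimizer

optmzr_pt = TNOptimizer(
    mps,
    loss_fn=negative_overlap,
    norm_fn=normalize_state,
    loss_constants={'target': target},
)
mps_opt_pt = optmzr_pt.optimize(100)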