In [ ]:
# launch XVFB if you run on a server
import os
if not os.environ.get("DISPLAY"):
    !bash ../xvfb start
    os.environ['DISPLAY'] = ':1'

Let's make a TRPO!

In this notebook we will implement Trust Region Policy Optimization (TRPO). As usual, it consists of several parts, which we are going to implement one by one.


In [ ]:
import numpy as np
import tensorflow as tf

In [ ]:
import gym

env = gym.make("Acrobot-v1")
env.reset()
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
print("Observation Space", env.observation_space)
print("Action Space", env.action_space)

In [ ]:
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(env.render('rgb_array'))

Step 1: Defining a network

For all its complexity, at its core TRPO is yet another policy gradient method.

This essentially means we're actually training a stochastic policy $ \pi_\theta(a|s) $.

And yes, it's gonna be a neural network. So let's start by defining one.


In [ ]:
# input tensors
observations_ph = tf.placeholder(
    shape=(None, observation_shape[0]), dtype=tf.float32)
# Actions that we took
actions_ph = tf.placeholder(shape=(None,), dtype=tf.int32)
# "G = r + gamma*r' + gamma^2*r'' + ..."
cummulative_returns_ph = tf.placeholder(shape=(None,), dtype=tf.float32)
# Action probabilities from previous iteration
old_probs_ph = tf.placeholder(shape=(None, n_actions), dtype=tf.float32)

all_inputs = [observations_ph, actions_ph,
              cummulative_returns_ph, old_probs_ph]

In [ ]:
def denselayer(name, x, out_dim, nonlinearity=None):
    with tf.variable_scope(name):
        if nonlinearity is None:
            nonlinearity = tf.identity

        x_shape = x.get_shape().as_list()

        w = tf.get_variable('w', shape=[x_shape[1], out_dim])
        b = tf.get_variable(
            'b', shape=[out_dim], initializer=tf.constant_initializer(0))
        o = nonlinearity(tf.matmul(x, w) + b)

        return o


sess = tf.InteractiveSession()

nn = observations_ph

<YOUR CODE: your network here>

policy_out = <YOUR CODE: layer that predicts action log-probabilities>

probs_out = tf.exp(policy_out)

weights = tf.trainable_variables()
sess.run(tf.global_variables_initializer())
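
If you get stuck, here is one possible way to fill in the placeholders above: a minimal sketch that assumes a small MLP with two 64-unit tanh hidden layers (our choice, not the only valid architecture). Run it instead of, not in addition to, your own version of the cell above.


In [ ]:
# a minimal reference sketch (assumption: two 64-unit tanh hidden layers)
nn = denselayer("hidden_1", observations_ph, 64, tf.nn.tanh)
nn = denselayer("hidden_2", nn, 64, tf.nn.tanh)

# layer that predicts action log-probabilities
policy_out = tf.nn.log_softmax(denselayer("logits", nn, n_actions))
probs_out = tf.exp(policy_out)

weights = tf.trainable_variables()
sess.run(tf.global_variables_initializer())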

Step 2: Actions and rollouts

In this section, we'll define functions that sample actions $ a \sim \pi_\theta(a|s) $ and generate rollouts $ \langle s_0, a_0, s_1, a_1, s_2, a_2, \dots, s_n, a_n \rangle $.


In [ ]:
# compile function


def act(obs, sample=True):
    """
    Samples action from policy distribution (sample = True) or takes most likely action (sample = False)
    :param obs: single observation vector
    :param sample: if True, samples from \pi, otherwise takes most likely action
    :returns: action (single integer) and probabilities for all actions
    """

    probs = sess.run(probs_out, feed_dict={
                     observations_ph: obs.reshape((1, -1))})[0]

    if sample:
        action = int(np.random.choice(n_actions, p=probs))
    else:
        action = int(np.argmax(probs))

    return action, probs

In [ ]:
# demo
print("sampled:", [act(env.reset()) for _ in range(5)])
print("greedy:", [act(env.reset(), sample=False) for _ in range(5)])

Compute the cumulative reward just like you did in vanilla REINFORCE.


In [ ]:
import scipy.signal


def get_cummulative_returns(r, gamma=1):
    """
    Computes cumulative discounted rewards given immediate rewards
    G_i = r_i + gamma*r_{i+1} + gamma^2*r_{i+2} + ...
    Also known as R(s,a).
    """
    r = np.array(r)
    assert r.ndim >= 1
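    # lfilter with denominator [1, -gamma] implements the recursion G_t = r_t + gamma * G_{t+1};
    # we reverse the rewards so the recursion runs from the end of the episode backwards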
    return scipy.signal.lfilter([1], [1, -gamma], r[::-1], axis=0)[::-1]

In [ ]:
# simple demo on rewards [0,0,1,0,0,1]
get_cummulative_returns([0, 0, 1, 0, 0, 1], gamma=0.9)
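# expected output for gamma=0.9: array([1.40049, 1.5561 , 1.729  , 0.81   , 0.9    , 1.     ])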

Rollout


In [ ]:
def rollout(env, act, max_pathlength=2500, n_timesteps=50000):
    """
    Generate rollouts for training.
    :param env: environment in which we take actions to generate rollouts.
    :param act: function that returns an action and the policy probabilities given an observation.
    :param max_pathlength: maximum length of a single generated path.
    :param n_timesteps: total number of timesteps across all generated paths.
    """
    paths = []

    total_timesteps = 0
    while total_timesteps < n_timesteps:
        observations, actions, rewards, action_probs = [], [], [], []
        observation = env.reset()
        for _ in range(max_pathlength):
            action, policy = act(observation)
            observations.append(observation)
            actions.append(action)
            action_probs.append(policy)
            observation, reward, done, _ = env.step(action)
            rewards.append(reward)
            total_timesteps += 1
            if done or total_timesteps == n_timesteps:
                path = {"observations": np.array(observations),
                        "policy": np.array(action_probs),
                        "actions": np.array(actions),
                        "rewards": np.array(rewards),
                        "cumulative_returns": get_cummulative_returns(rewards),
                        }
                paths.append(path)
                break
    return paths

In [ ]:
paths = rollout(env, act, max_pathlength=5, n_timesteps=100)
print(paths[-1])
assert (paths[0]['policy'].shape == (5, n_actions))
assert (paths[0]['cumulative_returns'].shape == (5,))
assert (paths[0]['rewards'].shape == (5,))
assert (paths[0]['observations'].shape == (5,)+observation_shape)
assert (paths[0]['actions'].shape == (5,))
print('It\'s ok')

Step 3: loss functions

Now let's define the loss functions and constraints for actual TRPO training.

The surrogate reward should be $$J_{surr}= {1 \over N} \sum\limits_{i=0}^N \frac{\pi_{\theta}(a_i|s_i)}{\pi_{\theta_{old}}(a_i|s_i)}A_{\theta_{old}}(s_i, a_i)$$

For simplicity, let's use cumulative returns instead of the advantage for now: $$J'_{surr}= {1 \over N} \sum\limits_{i=0}^N \frac{\pi_{\theta}(a_i|s_i)}{\pi_{\theta_{old}}(a_i|s_i)}G_{\theta_{old}}(s_i, a_i)$$

Or alternatively, minimize the surrogate loss: $$ L_{surr} = - J'_{surr} $$


In [ ]:
# select probabilities of chosen actions
batch_size = tf.shape(observations_ph)[0]
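# trick: flatten the [batch, n_actions] probability matrix and pick element i * n_actions + a_i for row i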
probs_all = tf.reshape(probs_out, [-1])
probs_for_actions = tf.gather(probs_all, tf.range(
    0, batch_size) * n_actions + actions_ph)
old_probs_all = tf.reshape(old_probs_ph, [-1])
old_probs_for_actions = tf.gather(
    old_probs_all, tf.range(0, batch_size) * n_actions + actions_ph)

In [ ]:
# Compute surrogate loss: negative importance-sampled policy gradient

L_surr = <YOUR CODE: compute surrogate loss, aka _negative_ importance-sampled policy gradient>
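
If you need a reference, here is one possible formula (a sketch, not necessarily the intended solution): importance weights times cumulative returns, averaged over the batch and negated.


In [ ]:
# a possible reference implementation (sketch): negative importance-sampled return
L_surr = -tf.reduce_mean(probs_for_actions / old_probs_for_actions *
                         cummulative_returns_ph)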

In [ ]:
# compute and return surrogate policy gradient
def var_shape(x):
    return [k.value for k in x.get_shape()]


def numel(x):
    return np.prod(var_shape(x))


def flatgrad(loss, var_list):
    grads = tf.gradients(loss, var_list)
    return tf.concat([tf.reshape(grad, [numel(v)])
                      for (v, grad) in zip(var_list, grads)], 0)


flat_grad_surr = flatgrad(L_surr, weights)

We can ascend these gradients as long as our $\pi_\theta(a|s)$ satisfies the constraint $$E_{s,\pi_{\theta_{t}}}\Big[KL\big(\pi_{\theta_{t}}(\cdot|s) \:\|\: \pi_{\theta_{t+1}}(\cdot|s)\big)\Big] < \alpha$$

where

$$KL(p\,\|\,q) = E_p \log\left({p \over q}\right)$$

In [ ]:
# Compute Kullback-Leibler divergence (see formula above)
# Note: you need to sum KL and entropy over all actions, not just the ones agent took
old_log_probs = tf.log(old_probs_ph+1e-10)

kl = <YOUR CODE: compute Kullback-Leibler as per formula above>

# Compute policy entropy
entropy = <YOUR CODE: compute policy entropy. Don't forget the sign!>

losses = [L_surr, kl, entropy]
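
For reference, one possible way to write these (a sketch that assumes policy_out holds log-probabilities, as stated above; both quantities are summed over all actions and averaged over the batch):


In [ ]:
# possible reference implementations (sketch)
# KL(pi_old || pi_new), summed over actions, averaged over states in the batch
kl = tf.reduce_sum(
    old_probs_ph * (old_log_probs - policy_out)) / tf.cast(batch_size, tf.float32)
# policy entropy, summed over actions, averaged over states in the batch
entropy = tf.reduce_sum(
    -probs_out * policy_out) / tf.cast(batch_size, tf.float32)

losses = [L_surr, kl, entropy]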

Line search

At its core, TRPO ascends the surrogate policy gradient subject to a KL-divergence constraint.

In order to enforce this constraint, we're gonna use linesearch. You can find out more about it here


In [ ]:
def linesearch(f, x, fullstep, max_kl):
    """
    Linesearch finds the best parameters of neural networks in the direction of fullstep contrainted by KL divergence.
    :param: f - function that returns loss, kl and arbitrary third component.
    :param: x - old parameters of neural network.
    :param: fullstep - direction in which we make search.
    :param: max_kl - constraint of KL divergence.
    :returns:
    """
    max_backtracks = 10
    loss, _, _ = f(x)
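    # backtracking: try step fractions 1, 1/2, 1/4, ... of the full step and keep the best
    # point that improves the loss while still satisfying the KL constraint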
    for stepfrac in .5**np.arange(max_backtracks):
        xnew = x + stepfrac * fullstep
        new_loss, kl, _ = f(xnew)
        actual_improve = new_loss - loss
        if kl <= max_kl and actual_improve < 0:
            x = xnew
            loss = new_loss
    return x

Step 4: training

In this section we construct the remaining parts of our computational graph.


In [ ]:
def slice_vector(vector, shapes):
    """
    Slices symbolic vector into several symbolic tensors of given shapes.
    Auxiliary function used to un-flatten gradients, tangents etc.
    :param vector: 1-dimensional symbolic vector
    :param shapes: list or tuple of shapes (list, tuple or symbolic)
    :returns: list of symbolic tensors of given shapes
    """
    assert len(vector.get_shape()) == 1, "vector must be 1-dimensional"
    start = 0
    tensors = []
    for shape in shapes:
        size = np.prod(shape)
        tensor = tf.reshape(vector[start:(start + size)], shape)
        tensors.append(tensor)
        start += size
    return tensors
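
A quick sanity check of slice_vector (an illustrative example we added, not part of the original assignment):


In [ ]:
# slice a flat vector of 6 numbers into a 2x2 matrix and a length-2 vector
demo_vector = tf.constant(np.arange(6, dtype=np.float32))
demo_slices = slice_vector(demo_vector, [(2, 2), (2,)])
print(sess.run(demo_slices))  # [array([[0., 1.], [2., 3.]]), array([4., 5.])]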

In [ ]:
# intermediate grad in conjugate_gradient
conjugate_grad_intermediate_vector = tf.placeholder(
    dtype=tf.float32, shape=(None,))

# slice flat_tangent into chunks for each weight
weight_shapes = [sess.run(var).shape for var in weights]
tangents = slice_vector(conjugate_grad_intermediate_vector, weight_shapes)

# KL divergence where first arg is fixed
kl_firstfixed = tf.reduce_sum((tf.stop_gradient(probs_out) * (tf.stop_gradient(
    tf.log(probs_out)) - tf.log(probs_out)))) / tf.cast(batch_size, tf.float32)
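# this KL is identically zero at the current weights (both arguments coincide) and so is its
# gradient, but its second derivative w.r.t. the weights equals the Fisher information matrix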

# compute fisher information matrix (used for conjugate gradients and to estimate KL)
gradients = tf.gradients(kl_firstfixed, weights)
gradient_vector_product = [tf.reduce_sum(
    g * t) for (g, t) in zip(gradients, tangents)]
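# Fisher-vector product: differentiate (grad KL . tangent) w.r.t. the weights once more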

fisher_vec_prod = flatgrad(gradient_vector_product, weights)

TRPO helpers

Here we define a few helper functions used in the main TRPO loop

Conjugate gradients

Since TRPO involves constrained optimization, we will need to solve Ax=b using conjugate gradients.

In general, CG is an algorithm that solves Ax=b where A is positive-definite. Here A is the Fisher information matrix (the Hessian of the KL divergence), which is positive semi-definite; the cg_damping term we add later makes it strictly positive-definite. You can find out more about them here


In [ ]:
from numpy.linalg import inv


def conjugate_gradient(f_Ax, b, cg_iters=10, residual_tol=1e-10):
    """
    This method solves system of equation Ax=b using iterative method called conjugate gradients
    :f_Ax: function that returns Ax
    :b: targets for Ax
    :cg_iters: how many iterations this method should do
    :residual_tol: epsilon for stability
    """
    p = b.copy()
    r = b.copy()
    x = np.zeros_like(b)
    rdotr = r.dot(r)
    for i in range(cg_iters):
        z = f_Ax(p)
        v = rdotr / (p.dot(z) + 1e-8)
        x += v * p
        r -= v * z
        newrdotr = r.dot(r)
        mu = newrdotr / (rdotr + 1e-8)
        p = r + mu * p
        rdotr = newrdotr
        if rdotr < residual_tol:
            break
    return x

In [ ]:
# This code validates conjugate gradients
A = np.random.rand(8, 8)
A = np.matmul(np.transpose(A), A)


def f_Ax(x):
    return np.matmul(A, x.reshape(-1, 1)).reshape(-1)


b = np.random.rand(8)

w = np.matmul(np.matmul(inv(np.matmul(np.transpose(A), A)),
                        np.transpose(A)), b.reshape((-1, 1))).reshape(-1)
print(w)
print(conjugate_gradient(f_Ax, b))

In [ ]:
# Compile a function that exports network weights as a vector
flat_weights = tf.concat([tf.reshape(var, [-1]) for var in weights], axis=0)

# ... and another function that imports vector back into network weights
flat_weights_placeholder = tf.placeholder(tf.float32, shape=(None,))
assigns = slice_vector(flat_weights_placeholder, weight_shapes)

load_flat_weights = [w.assign(ph) for w, ph in zip(weights, assigns)]

Step 5: Main TRPO loop

Here we will train our network!


In [ ]:
import time
from itertools import count
from collections import OrderedDict

# This is the main hyperparameter of TRPO: it controls how large the KL divergence between the old and new policy may be at every step.
max_kl = 0.01
cg_damping = 0.1  # damping added to the Fisher-vector product for numerical stability
numeptotal = 0  # total number of episodes played so far

start_time = time.time()

for i in count(1):

    print("\n********** Iteration %i ************" % i)

    # Generating paths.
    print("Rollout")
    paths = rollout(env, act)
    print("Made rollout")

    # Updating policy.
    observations = np.concatenate([path["observations"] for path in paths])
    actions = np.concatenate([path["actions"] for path in paths])
    returns = np.concatenate([path["cumulative_returns"] for path in paths])
    old_probs = np.concatenate([path["policy"] for path in paths])
    inputs_batch = [observations, actions, returns, old_probs]
    feed_dict = {observations_ph: observations,
                 actions_ph: actions,
                 old_probs_ph: old_probs,
                 cummulative_returns_ph: returns,
                 }
    old_weights = sess.run(flat_weights)

    def fisher_vector_product(p):
        """gets intermediate grads (p) and computes fisher*vector """
        feed_dict[conjugate_grad_intermediate_vector] = p
        return sess.run(fisher_vec_prod, feed_dict) + cg_damping * p

    flat_grad = sess.run(flat_grad_surr, feed_dict)

    stepdir = conjugate_gradient(fisher_vector_product, -flat_grad)
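    # 0.5 * step^T * F * step approximates the KL divergence induced by the step;
    # rescale the step so that this quadratic estimate equals max_kl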
    shs = .5 * stepdir.dot(fisher_vector_product(stepdir))
    lm = np.sqrt(shs / max_kl)
    fullstep = stepdir / lm

    # Compute new weights with linesearch in the direction we found with CG

    def losses_f(flat_weights):
        feed_dict[flat_weights_placeholder] = flat_weights
        sess.run(load_flat_weights, feed_dict)
        return sess.run(losses, feed_dict)

    new_weights = linesearch(losses_f, old_weights, fullstep, max_kl)
    feed_dict[flat_weights_placeholder] = new_weights
    sess.run(load_flat_weights, feed_dict)

    # Report current progress
    L_surr, kl, entropy = sess.run(losses, feed_dict)
    episode_rewards = np.array([path["rewards"].sum() for path in paths])

    stats = OrderedDict()
    numeptotal += len(episode_rewards)
    stats["Total number of episodes"] = numeptotal
    stats["Average sum of rewards per episode"] = episode_rewards.mean()
    stats["Std of rewards per episode"] = episode_rewards.std()
    stats["Entropy"] = entropy
    stats["Time elapsed"] = "%.2f mins" % ((time.time() - start_time)/60.)
    stats["KL between old and new distribution"] = kl
    stats["Surrogate loss"] = L_surr
    for k, v in stats.items():
        print(k + ": " + " " * (40 - len(k)) + str(v))

Homework option I: better sampling (10+pts)

In this section, you're invited to implement a better rollout strategy called vine.

In most gym environments, you can actually backtrack by saving and restoring states. You can find a wrapper that saves/loads states in the mcts seminar.

You can read more about it in the TRPO article, section 5.2.

The goal here is to implement such a rollout policy (we recommend using a tree data structure like in the seminar above). Then you can assign cumulative rewards similarly to get_cummulative_returns, but over a tree.

bonus task - parallelize samples using multiple cores

Homework option II (10+pts)

Let's use TRPO to train evil robots! (pick either of the two)

The catch here is that those environments have continuous action spaces.

Luckily, TRPO is a policy gradient method, so it's gonna work for any parametric $\pi_\theta(a|s)$. We recommend starting with a Gaussian policy:

$$\pi_\theta(a|s) = N\big(\mu_\theta(s),\sigma^2_\theta(s)\big) = {1 \over \sqrt { 2 \pi {\sigma^2}_\theta(s) } } e^{ -{ (a - \mu_\theta(s))^2 \over 2 {\sigma^2}_\theta(s) } } $$

In the $\sqrt { 2 \pi {\sigma^2}_\theta(s) }$ term, $\pi$ means ~3.1415926, not the agent's policy.

This essentially means that you will need two output layers:

  • $\mu_\theta(s)$, a dense layer with linear activation
  • ${\sigma^2}_\theta(s)$, a dense layer with activation tf.exp (to make it positive; like rho from bandits)

For multidimensional actions, you can use fully factorized gaussian (basically a vector of gaussians).
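
Here is a minimal sketch of such a Gaussian policy head (purely illustrative: the placeholder names, the 64-unit hidden layer and the 1-dimensional action are our assumptions; adapt them to your environment and plug the result into the TRPO machinery above):


In [ ]:
# hypothetical Gaussian policy head for a 1-dimensional continuous action (sketch)
cont_observations_ph = tf.placeholder(tf.float32, shape=(None, observation_shape[0]))
cont_actions_ph = tf.placeholder(tf.float32, shape=(None, 1))

cont_hidden = denselayer("cont_hidden", cont_observations_ph, 64, tf.nn.tanh)
mu = denselayer("mu_head", cont_hidden, 1)                      # mean: linear activation
sigma_sq = denselayer("sigma_sq_head", cont_hidden, 1, tf.exp)  # variance: kept positive by tf.exp

# log-density of an action under N(mu, sigma^2); exponentiating it gives the formula above
log_pi = -0.5 * tf.log(2 * np.pi * sigma_sq) \
    - tf.square(cont_actions_ph - mu) / (2 * sigma_sq)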

bonus task: compare performance of continuous action space method to action space discretization


In [ ]: