In [ ]:
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
    !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/spring20/setup_colab.sh -O- | bash
    
    !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/spring20/week09_policy_II/runners.py
    !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/spring20/week09_policy_II/mujoco_wrappers.py
    
    !touch .setup_complete

# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
    !bash ../xvfb start
    os.environ['DISPLAY'] = ':1'

Implementing Proximal Policy Optimization

In this notebook you will implement the Proximal Policy Optimization (PPO) algorithm, a scaled-up version of which was used to train OpenAI Five to win against the world champions in Dota 2. You will be solving a continuous control environment, on which it may be easier and faster to train an agent. Note, however, that PPO is not necessarily the best algorithm here: for example, Deep Deterministic Policy Gradient and Soft Actor-Critic may be better suited for continuous control environments. To run the environment you will need to install pybullet-gym, which, unlike MuJoCo, does not require a license.

To install the library:


In [ ]:
!git clone https://github.com/benelot/pybullet-gym lib/pybullet-gym
!pip install -e lib/pybullet-gym

The overall structure of the code is similar to that of the optional A2C homework, but don't worry if you haven't done it: it should be relatively easy to figure out. First, we will create an instance of the environment. We will normalize the observations and rewards, but before that you will need a wrapper that writes summaries, most importantly the total reward during an episode. You can either use the TensorFlow one implemented in the atari_wrappers.py file from the optional A2C homework, or implement your own.


In [ ]:
import gym 
import pybulletgym

env = gym.make("HalfCheetahMuJoCoEnv-v0")
print("observation space: ", env.observation_space,
      "\nobservations:", env.reset())
print("action space: ", env.action_space, 
      "\naction_sample: ", env.action_space.sample())

In [ ]:
class Summaries(gym.Wrapper):
  """ Wrapper to write summaries. """
  def step(self, action):
    # TODO: implement writing summaries
    return self.env.step(action)
  
  def reset(self, **kwargs):
    # TODO: implement writing summaries
    return self.env.reset(**kwargs)
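
If you don't want to adapt the A2C wrappers, a minimal sketch that only tracks episode returns and lengths and prints them (instead of writing proper TensorBoard summaries) could look like the cell below. The name PrintSummaries is hypothetical, and the 4-tuple step API is assumed to match the gym version used in this notebook.


In [ ]:
class PrintSummaries(gym.Wrapper):
  """ Sketch: tracks episode return and length, prints them when an episode ends. """
  def __init__(self, env):
    super().__init__(env)
    self.episode_return = 0.
    self.episode_length = 0

  def step(self, action):
    obs, rew, done, info = self.env.step(action)
    self.episode_return += rew
    self.episode_length += 1
    if done:
      print(f"episode return: {self.episode_return:.1f}, "
            f"episode length: {self.episode_length}")
    return obs, rew, done, info

  def reset(self, **kwargs):
    self.episode_return = 0.
    self.episode_length = 0
    return self.env.reset(**kwargs)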

The normalization wrapper will subtract the running mean from observations and rewards and divide the results by the running standard deviations.


In [ ]:
from mujoco_wrappers import Normalize

env = Normalize(Summaries(gym.make("HalfCheetahMuJoCoEnv-v0")));
env.unwrapped.seed(0);
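
The Normalize wrapper imported above already implements this, so the following cell is for intuition only: a simplified, scalar sketch of the running-statistics update such a wrapper is typically built on (a hypothetical class, not the actual mujoco_wrappers code).


In [ ]:
import numpy as np

class RunningMeanVarSketch:
  """ Sketch: maintains the running mean and variance of a stream of values. """
  def __init__(self, eps=1e-4):
    self.mean, self.var, self.count = 0.0, 1.0, eps

  def update(self, batch):
    batch = np.asarray(batch, dtype=np.float64)
    batch_mean, batch_var, batch_count = batch.mean(), batch.var(), batch.size
    delta = batch_mean - self.mean
    total = self.count + batch_count
    # standard formula for combining the means/variances of two samples
    new_mean = self.mean + delta * batch_count / total
    m2 = (self.var * self.count + batch_var * batch_count
          + delta ** 2 * self.count * batch_count / total)
    self.mean, self.var, self.count = new_mean, m2 / total, total

  def normalize(self, x):
    return (x - self.mean) / np.sqrt(self.var + 1e-8)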

Next, you will need to define a model for training. We suggest that you use two separate networks: one for the policy and another for the value function. Each network should be a 3-layer MLP with 64 hidden units, $\mathrm{tanh}$ activation function, kernel matrices initialized with an orthogonal initializer with gain $\sqrt{2}$, and biases initialized with zeros.

Our policy distribution is going to be a multivariate normal with diagonal covariance. The network above will predict the mean, and the covariance should be represented by a single (learned) vector of size 6 (the dimensionality of the action space above). You should initialize this vector with zeros and take its exponent so that the resulting quantity is always positive.

Overall, the model should return three things: the predicted mean of the distribution, the (diagonal) covariance vector, and the value function prediction.


In [ ]:
# import tensorflow as tf
# import torch

<Define your model here>
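
For reference, below is one possible sketch of such a model in PyTorch (assuming you pick PyTorch; a TensorFlow version would be analogous). Whether the exponentiated vector is interpreted as the standard deviation or the variance is up to you; here it is treated as the diagonal of the covariance matrix.


In [ ]:
import numpy as np
import torch
import torch.nn as nn

def init_layer(layer, gain=np.sqrt(2)):
  """ Orthogonal weight initialization with the given gain, zero biases. """
  nn.init.orthogonal_(layer.weight, gain=gain)
  nn.init.zeros_(layer.bias)
  return layer

def mlp(input_dim, output_dim, hidden_dim=64):
  """ 3-layer MLP with tanh activations. """
  return nn.Sequential(
      init_layer(nn.Linear(input_dim, hidden_dim)), nn.Tanh(),
      init_layer(nn.Linear(hidden_dim, hidden_dim)), nn.Tanh(),
      init_layer(nn.Linear(hidden_dim, output_dim)))

class PolicyValueModel(nn.Module):
  """ Separate policy and value networks plus a learned log-covariance vector. """
  def __init__(self, obs_dim, action_dim):
    super().__init__()
    self.policy_net = mlp(obs_dim, action_dim)
    self.value_net = mlp(obs_dim, 1)
    self.log_cov = nn.Parameter(torch.zeros(action_dim))

  def forward(self, observations):
    mean = self.policy_net(observations)
    cov_vector = torch.exp(self.log_cov)          # always positive
    values = self.value_net(observations).squeeze(-1)
    return mean, cov_vector, values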

This model will be wrapped by a Policy. The policy can work in two modes, but in either case it is going to return a dictionary with string keys. The first mode is used when the policy samples actions for a trajectory that will later be used for training. In this case the training flag passed to the act method is False and the method should return a dict with the following keys:

  • "actions": actions to pass to the environment
  • "log_probs": log-probabilities of sampled actions
  • "values": value function $V^\pi(s)$ predictions.

We will not backpropagate through the values under these keys, so all of them should be of type np.ndarray.

When training is True, the policy is being trained on a given batch of observations. In this case it should return a dict with the following keys:

  • "distribution": an instance of multivariate normal distribution (torch.distributions.MultivariateNormal or tf.distributions.MultivariateNormalDiag)
  • "values": value function $V^\pi(s)$ prediction.

Which mode is used depends on where the policy is called from: if it is called from EnvRunner, the training flag is False; if it is called from PPO, the training flag is True. These classes are described below.


In [ ]:
class Policy:
  def __init__(self, model):
    self.model = model
    
  def act(self, inputs, training=False):
    <TODO: Implement policy by calling model>
    # Should return a dict.
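
A possible sketch of act for the PyTorch model above (PolicyValueModel and its returned covariance vector are assumptions from that sketch, not part of the assignment):


In [ ]:
import torch
from torch.distributions import MultivariateNormal

class PolicySketch:
  """ Sketch of a Policy built around the PolicyValueModel sketch. """
  def __init__(self, model):
    self.model = model

  def act(self, inputs, training=False):
    inputs = torch.as_tensor(inputs, dtype=torch.float32)
    if inputs.ndim == 1:
      inputs = inputs[None]                        # add a batch dimension
    mean, cov_vector, values = self.model(inputs)
    covariance = torch.diag_embed(cov_vector.expand_as(mean))
    distribution = MultivariateNormal(mean, covariance_matrix=covariance)
    if training:
      # keep tensors attached to the graph for backpropagation
      return {"distribution": distribution, "values": values}
    actions = distribution.sample()
    log_probs = distribution.log_prob(actions)
    return {"actions": actions.numpy()[0],
            "log_probs": log_probs.detach().numpy()[0],
            "values": values.detach().numpy()[0]}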

We will use EnvRunner to perform interactions with the environment using a policy for a fixed number of timesteps. Calling .get_next() on a runner will return a trajectory: a dictionary containing the keys

  • "observations"
  • "rewards"
  • "resets"
  • "actions"
  • all other keys that you defined in Policy,

under each of these keys there is an np.ndarray of length $T$, the size of the partial trajectory.

Additionally, before returning a trajectory this runner can apply a list of transformations. Each transformation is simply a callable that should modify the passed trajectory in place.


In [ ]:
class AsArray:
  """ 
  Converts lists of interactions to ndarray.
  """
  def __call__(self, trajectory):
    # Modify trajectory inplace. 
    for k, v in filter(lambda kv: kv[0] != "state",
                       trajectory.items()):
      trajectory[k] = np.asarray(v)

In [ ]:
import numpy as np
from runners import EnvRunner

class DummyPolicy:
  def act(self, inputs, training=False):
    assert not training
    return {"actions": np.random.randn(6), "values": np.nan}
  
runner = EnvRunner(env, DummyPolicy(), 3,
                   transforms=[AsArray()])
trajectory = runner.get_next()

{k: v.shape for k, v in trajectory.items() if k != "state"}

You will need to implement the following two transformations.

The first is GAE, which implements the Generalized Advantage Estimator. It should add two keys to the trajectory: "advantages" and "value_targets". In GAE the advantages $A_t^{\mathrm{GAE}(\gamma,\lambda)}$ are essentially defined as the exponential moving average with parameter $\lambda$ of the regular advantages $\hat{A}^{(T)}(s_t) = \sum_{l=0}^{T-1-t} \gamma^l r_{t+l} + \gamma^{T-t} V^\pi(s_{T}) - V^\pi(s_t)$. The exact formula for the computation is the following:

$$ A_{t}^{\mathrm{GAE}(\gamma,\lambda)} = \sum_{l=0}^{T-1-t} (\gamma\lambda)^l\delta_{t + l}^V, \, t \in [0, T) $$

where $\delta_{t+l}^V = r_{t+l} + \gamma V^\pi(s_{t+l+1}) - V^\pi(s_{t+l})$. You can look at the derivation (formulas 11-16) in the paper. Don't forget to reset the summation on terminal states as determined by the flags trajectory["resets"]. You can use trajectory["values"] to get the values of all observations except the most recent one, which is stored under trajectory["state"]["latest_observation"]. For this observation you will need to call the policy to get the value prediction.

Once you have computed the advantages, you can obtain the targets for training the value function by adding the values back: $$ \hat{V}(s_{t+l}) = A_{t+l}^{\mathrm{GAE}(\gamma,\lambda)} + V(s_{t + l}), $$ where $\hat{V}$ is a tensor of value targets that are used to train the value function.


In [ ]:
class GAE:
  """ Generalized Advantage Estimator. """
  def __init__(self, policy, gamma=0.99, lambda_=0.95):
    self.policy = policy
    self.gamma = gamma
    self.lambda_ = lambda_
    
  def __call__(self, trajectory):
    <TODO: implement>
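
For reference, GAE is usually computed with a backward recursion, $A_t = \delta_t^V + \gamma\lambda A_{t+1}$, restarted on terminal states. A sketch is given below; it assumes that trajectory["resets"][t] is True when the episode ended at step $t$ and that "rewards" and "values" are 1-D float arrays, so check these assumptions against your runner.


In [ ]:
class GAESketch:
  """ Sketch of the Generalized Advantage Estimator (backward recursion). """
  def __init__(self, policy, gamma=0.99, lambda_=0.95):
    self.policy = policy
    self.gamma = gamma
    self.lambda_ = lambda_

  def __call__(self, trajectory):
    rewards = trajectory["rewards"]
    resets = trajectory["resets"].astype(np.float32)
    values = trajectory["values"]
    # value prediction for the observation that follows the last step
    last_value = self.policy.act(
        trajectory["state"]["latest_observation"])["values"]

    T = len(rewards)
    advantages = np.zeros(T, dtype=np.float32)
    gae = 0.
    for t in reversed(range(T)):
      next_value = last_value if t == T - 1 else values[t + 1]
      delta = rewards[t] + self.gamma * (1 - resets[t]) * next_value - values[t]
      gae = delta + self.gamma * self.lambda_ * (1 - resets[t]) * gae
      advantages[t] = gae

    trajectory["advantages"] = advantages
    trajectory["value_targets"] = advantages + values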

The main advantage of PPO over simpler policy-based methods like A2C is that it can train on the same trajectory for multiple gradient steps. The following class wraps an EnvRunner. It should call the runner to get a trajectory, then return minibatches from it for a number of epochs, shuffling the data before each epoch.


In [ ]:
class TrajectorySampler:
  """ Samples minibatches from trajectory for a number of epochs. """
  def __init__(self, runner, num_epochs, num_minibatches, transforms=None):
    self.runner = runner
    self.num_epochs = num_epochs
    self.num_minibatches = num_minibatches
    self.transforms = transforms or []
    self.minibatch_count = 0
    self.epoch_count = 0
    self.trajectory = None
    
  def shuffle_trajectory(self):
    """ Shuffles all elements in trajectory.
    
    Should be called at the beginning of each epoch.
    """
    <TODO: implement>
    
  def get_next(self):
    """ Returns next minibatch.  """
    <TODO: implement>
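
A sketch of the two methods (assuming every entry of the trajectory except "state" is an array with the same first dimension $T$, and that $T$ is divisible by num_minibatches):


In [ ]:
class TrajectorySamplerSketch(TrajectorySampler):
  """ Sketch of shuffling and minibatch iteration over a sampled trajectory. """
  def shuffle_trajectory(self):
    trajectory_len = len(self.trajectory["observations"])
    permutation = np.random.permutation(trajectory_len)
    for key, value in self.trajectory.items():
      if key != "state":
        self.trajectory[key] = value[permutation]

  def get_next(self):
    # sample a new trajectory when starting out or after finishing all epochs
    if self.trajectory is None or self.epoch_count == self.num_epochs:
      self.trajectory = self.runner.get_next()
      self.epoch_count = self.minibatch_count = 0
    # reshuffle at the beginning of every epoch
    if self.minibatch_count == 0:
      self.shuffle_trajectory()

    batch_size = len(self.trajectory["observations"]) // self.num_minibatches
    start = self.minibatch_count * batch_size
    minibatch = {key: value[start:start + batch_size]
                 for key, value in self.trajectory.items() if key != "state"}

    self.minibatch_count += 1
    if self.minibatch_count == self.num_minibatches:
      self.minibatch_count = 0
      self.epoch_count += 1

    for transform in self.transforms:
      transform(minibatch)
    return minibatch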

A common trick to use with GAE is to normalize the advantages; the following transformation does that.


In [ ]:
class NormalizeAdvantages:
  """ Normalizes advantages to have zero mean and variance 1. """
  def __call__(self, trajectory):
    adv = trajectory["advantages"]
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    trajectory["advantages"] = adv

Finally, we can create our PPO runner.


In [ ]:
def make_ppo_runner(env, policy, num_runner_steps=2048,
                    gamma=0.99, lambda_=0.95, 
                    num_epochs=10, num_minibatches=32):
  """ Creates runner for PPO algorithm. """
  runner_transforms = [AsArray(),
                       GAE(policy, gamma=gamma, lambda_=lambda_)]
  runner = EnvRunner(env, policy, num_runner_steps, 
                     transforms=runner_transforms)
  
  sampler_transforms = [NormalizeAdvantages()]
  sampler = TrajectorySampler(runner, num_epochs=num_epochs, 
                              num_minibatches=num_minibatches,
                              transforms=sampler_transforms)
  return sampler

In the next cell you will need to implement the Proximal Policy Optimization algorithm itself. The algorithm modifies the typical policy gradient loss in the following way:

$$ J_{\pi}(s, a) = \frac{\pi_\theta(a|s)}{\pi_{\theta^{\text{old}}}(a|s)} \cdot A^{\mathrm{GAE}(\gamma,\lambda)}(s, a) $$

$$ J_{\pi}^{\text{clipped}}(s, a) = \mathrm{clip}\left( \frac{\pi_\theta(a|s)}{\pi_{\theta^{\text{old}}}(a|s)}, 1 - \text{cliprange}, 1 + \text{cliprange}\right)\cdot A^{\mathrm{GAE}(\gamma, \lambda)}(s, a) $$

$$ L_{\text{policy}} = -\frac{1}{T}\sum_{l=0}^{T-1}\min\left(J_\pi(s_{t + l}, a_{t + l}), J_{\pi}^{\text{clipped}}(s_{t + l}, a_{t + l})\right). $$

The value loss is also modified:

$$ L_{V}^{\text{clipped}} = \frac{1}{T}\sum_{l=0}^{T-1} \max\left(l^{\text{simple}}(s_{t + l}), l^{\text{clipped}}(s_{t + l})\right), $$

where $l^{\text{simple}}$ is your standard critic loss $$ l^{\text{simple}}(s_{t + l}) = [V_\theta(s_{t+l}) - G(s_{t + l})]^2 $$

and $l^{\text{clipped}}$ is a clipped version that limits large changes of the value function: $$ l^{\text{clipped}}(s_{t + l}) = [ V_{\theta^{\text{old}}}(s_{t+l}) + \text{clip}\left( V_\theta(s_{t+l}) - V_{\theta^\text{old}}(s_{t+l}), -\text{cliprange}, \text{cliprange} \right) - G(s_{t + l})] ^ 2 $$


In [ ]:
class PPO:
  def __init__(self, policy, optimizer,
               cliprange=0.2,
               value_loss_coef=0.25,
               max_grad_norm=0.5):
    self.policy = policy
    self.optimizer = optimizer
    self.cliprange = cliprange
    self.value_loss_coef = value_loss_coef
    # Note that we don't need entropy regularization for this env.
    self.max_grad_norm = max_grad_norm
    
  def policy_loss(self, trajectory, act):
    """ Computes and returns policy loss on a given trajectory. """
    <TODO: implement>
      
  def value_loss(self, trajectory, act):
    """ Computes and returns value loss on a given trajectory. """
    <TODO: implement>
      
  def loss(self, trajectory):
    act = self.policy.act(trajectory["observations"], training=True)
    policy_loss = self.policy_loss(trajectory, act)
    value_loss = self.value_loss(trajectory, act)
    return policy_loss + self.value_loss_coef * value_loss
      
  def step(self, trajectory):
    """ Computes the loss function and performs a single gradient step. """
    <TODO: implement>
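
A PyTorch sketch of the two loss terms and the gradient step is given below. It assumes the trajectory keys defined earlier and the PolicySketch above; treat it as a reference sketch, not as the only correct implementation.


In [ ]:
class PPOSketch(PPO):
  """ Sketch of the clipped PPO objective in PyTorch. """
  def policy_loss(self, trajectory, act):
    advantages = torch.as_tensor(trajectory["advantages"], dtype=torch.float32)
    old_log_probs = torch.as_tensor(trajectory["log_probs"], dtype=torch.float32)
    actions = torch.as_tensor(trajectory["actions"], dtype=torch.float32)

    log_probs = act["distribution"].log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)
    clipped_ratio = torch.clamp(ratio, 1. - self.cliprange, 1. + self.cliprange)
    return -torch.mean(torch.min(ratio * advantages, clipped_ratio * advantages))

  def value_loss(self, trajectory, act):
    old_values = torch.as_tensor(trajectory["values"], dtype=torch.float32)
    targets = torch.as_tensor(trajectory["value_targets"], dtype=torch.float32)
    values = act["values"]

    clipped_values = old_values + torch.clamp(
        values - old_values, -self.cliprange, self.cliprange)
    l_simple = (values - targets) ** 2
    l_clipped = (clipped_values - targets) ** 2
    return torch.mean(torch.max(l_simple, l_clipped))

  def step(self, trajectory):
    self.optimizer.zero_grad()
    loss = self.loss(trajectory)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(self.policy.model.parameters(),
                                   self.max_grad_norm)
    self.optimizer.step()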

Now everything is ready for training. Within one million interactions it should be possible to achieve a total raw (non-normalized) reward of about 1500. You should plot this quantity with respect to runner.step_var, the number of interactions with the environment. It is highly encouraged to also provide plots of the following quantities (these are useful for debugging as well):

  • Coefficient of Determination between value targets and value predictions
  • Entropy of the policy $\pi$
  • Value loss
  • Policy loss
  • Value targets
  • Value predictions
  • Gradient norm
  • Advantages

For optimization it is suggested to use the Adam optimizer with a learning rate linearly annealed from 3e-4 to 0 and epsilon 1e-5.
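
A minimal training-loop sketch with linear learning-rate annealing is shown below. It reuses the hypothetical classes from the sketches above (PolicyValueModel, PolicySketch, PPOSketch), so substitute your own implementations; the way the interaction counter is read (here ppo_runner.runner.step_var) depends on runners.py and your TrajectorySampler and is an assumption. Logging of the quantities listed above is left as a TODO.


In [ ]:
model = PolicyValueModel(env.observation_space.shape[0],
                         env.action_space.shape[0])
policy = PolicySketch(model)
ppo_runner = make_ppo_runner(env, policy)

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, eps=1e-5)
num_env_steps = 10**6
# one optimizer update per minibatch:
# (env steps / runner steps) * epochs * minibatches
total_updates = num_env_steps // 2048 * 10 * 32
lr_scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda update: max(1. - update / total_updates, 0.))
ppo = PPOSketch(policy, optimizer)

while int(ppo_runner.runner.step_var) < num_env_steps:
  trajectory = ppo_runner.get_next()
  ppo.step(trajectory)
  lr_scheduler.step()
  # TODO: log total episode reward, losses, entropy, value targets/predictions,
  # gradient norm and advantages as suggested above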


In [ ]: