Chapter 16 – Reinforcement Learning
This notebook contains all the sample code and solutions to the exercises in chapter 16.
First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures:
In [1]:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
import sys
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures and animations
%matplotlib nbagg
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rl"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Note: there may be minor differences between the output of this notebook and the examples shown in the book. You can safely ignore these differences. They are mainly due to the fact that most of the environments provided by OpenAI gym have some randomness.
In this notebook we will be using OpenAI gym, a great toolkit for developing and comparing Reinforcement Learning algorithms. It provides many environments for your learning agents to interact with. Let's start by importing gym:
In [2]:
import gym
Next we will load the MsPacman environment, version 0.
In [3]:
env = gym.make('MsPacman-v0')
Let's initialize the environment by calling its reset() method. This returns an observation:
In [4]:
obs = env.reset()
Observations vary depending on the environment. In this case it is an RGB image represented as a 3D NumPy array of shape [height, width, channels] (with 3 channels: Red, Green and Blue). In other environments it may return different objects, as we will see later.
In [5]:
obs.shape
Out[5]:
An environment can be visualized by calling its render() method, and you can pick the rendering mode (the rendering options depend on the environment). In this example we will set mode="rgb_array" to get an image of the environment as a NumPy array:
In [6]:
img = env.render(mode="rgb_array")
Let's plot this image:
In [7]:
plt.figure(figsize=(5,4))
plt.imshow(img)
plt.axis("off")
save_fig("MsPacman")
plt.show()
Welcome back to the 1980s! :)
In this environment, the rendered image is simply equal to the observation (but in many environments this is not the case):
In [8]:
(img == obs).all()
Out[8]:
Let's create a little helper function to plot an environment:
In [9]:
def plot_environment(env, figsize=(5,4)):
plt.close() # or else nbagg sometimes plots in the previous cell
plt.figure(figsize=figsize)
img = env.render(mode="rgb_array")
plt.imshow(img)
plt.axis("off")
plt.show()
Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like:
In [10]:
env.action_space
Out[10]:
Discrete(9) means that the possible actions are integers 0 through 8, which represent the 9 possible positions of the joystick (0=center, 1=up, 2=right, 3=left, 4=down, 5=upper-right, 6=upper-left, 7=lower-right, 8=lower-left).
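You can check this mapping directly on Atari environments (note: get_action_meanings() is provided by the underlying Atari environment rather than the generic Gym API, so treat this as a hedged sketch):

env.unwrapped.get_action_meanings()
# e.g. ['NOOP', 'UP', 'RIGHT', 'LEFT', 'DOWN', 'UPRIGHT', 'UPLEFT', 'DOWNRIGHT', 'DOWNLEFT']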
Next we need to tell the environment which action to play, and it will compute the next step of the game. Let's go left for 110 steps, then lower left for 40 steps:
In [11]:
env.reset()
for step in range(110):
env.step(3) #left
for step in range(40):
env.step(8) #lower-left
Where are we now?
In [12]:
plot_environment(env)
The step() function actually returns several important objects:
In [13]:
obs, reward, done, info = env.step(0)
The observation tells the agent what the environment looks like, as discussed earlier. This is a 210x160 RGB image:
In [14]:
obs.shape
Out[14]:
The environment also tells the agent how much reward it got during the last step:
In [15]:
reward
Out[15]:
When the game is over, the environment returns done=True:
In [16]:
done
Out[16]:
Finally, info is an environment-specific dictionary that can provide some extra information about the internal state of the environment. This is useful for debugging, but your agent should not use this information for learning (it would be cheating).
In [17]:
info
Out[17]:
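For Atari environments, info typically reports the number of lives left; for example (the exact key is an ALE implementation detail, so this is a hedged sketch):

info.get('ale.lives')  # e.g. 3 at the start of a Ms. Pacman game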
Let's play one full game (with 3 lives), by moving in random directions for 10 steps at a time, recording each frame:
In [18]:
frames = []
n_max_steps = 1000
n_change_steps = 10
obs = env.reset()
for step in range(n_max_steps):
img = env.render(mode="rgb_array")
frames.append(img)
if step % n_change_steps == 0:
action = env.action_space.sample() # play randomly
obs, reward, done, info = env.step(action)
if done:
break
Now show the animation (it's a bit jittery within Jupyter):
In [19]:
def update_scene(num, frames, patch):
patch.set_data(frames[num])
return patch,
def plot_animation(frames, repeat=False, interval=40):
plt.close() # or else nbagg sometimes plots in the previous cell
fig = plt.figure()
patch = plt.imshow(frames[0])
plt.axis('off')
return animation.FuncAnimation(fig, update_scene, fargs=(frames, patch), frames=len(frames), repeat=repeat, interval=interval)
In [20]:
video = plot_animation(frames)
plt.show()
Once you have finished playing with an environment, you should close it to free up resources:
In [21]:
env.close()
To code our first learning agent, we will be using a simpler environment: the Cart-Pole.
The Cart-Pole is a very simple environment composed of a cart that can move left or right, and a pole placed vertically on top of it. The agent must move the cart left or right to keep the pole upright.
In [22]:
env = gym.make("CartPole-v0")
In [23]:
obs = env.reset()
In [24]:
obs
Out[24]:
The observation is a 1D NumPy array composed of 4 floats: they represent the cart's horizontal position, its velocity, the angle of the pole (0 = vertical), and the angular velocity. Let's render the environment... unfortunately we need to fix an annoying rendering issue first.
Some environments (including the Cart-Pole) require access to your display, which opens up a separate window, even if you specify the rgb_array mode. In general you can safely ignore that window. However, if Jupyter is running on a headless server (i.e., without a screen) it will raise an exception. One way to avoid this is to install a fake X server like Xvfb. You can start Jupyter using the xvfb-run command:
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
If Jupyter is running on a headless server but you don't want to worry about Xvfb, then you can just use the following rendering function for the Cart-Pole:
In [25]:
from PIL import Image, ImageDraw
try:
from pyglet.gl import gl_info
openai_cart_pole_rendering = True # no problem, let's use OpenAI gym's rendering function
except Exception:
openai_cart_pole_rendering = False # probably no X server available, let's use our own rendering function
def render_cart_pole(env, obs):
if openai_cart_pole_rendering:
# use OpenAI gym's rendering function
return env.render(mode="rgb_array")
else:
# rendering for the cart pole environment (in case OpenAI gym can't do it)
img_w = 600
img_h = 400
cart_w = img_w // 12
cart_h = img_h // 15
pole_len = img_h // 3.5
pole_w = img_w // 80 + 1
x_width = 2
max_ang = 0.2
bg_col = (255, 255, 255)
cart_col = 0x000000 # Blue Green Red
pole_col = 0x669acc # Blue Green Red
pos, vel, ang, ang_vel = obs
img = Image.new('RGB', (img_w, img_h), bg_col)
draw = ImageDraw.Draw(img)
cart_x = pos * img_w // x_width + img_w // x_width
cart_y = img_h * 95 // 100
top_pole_x = cart_x + pole_len * np.sin(ang)
top_pole_y = cart_y - cart_h // 2 - pole_len * np.cos(ang)
draw.line((0, cart_y, img_w, cart_y), fill=0)
draw.rectangle((cart_x - cart_w // 2, cart_y - cart_h // 2, cart_x + cart_w // 2, cart_y + cart_h // 2), fill=cart_col) # draw cart
draw.line((cart_x, cart_y - cart_h // 2, top_pole_x, top_pole_y), fill=pole_col, width=pole_w) # draw pole
return np.array(img)
def plot_cart_pole(env, obs):
plt.close() # or else nbagg sometimes plots in the previous cell
img = render_cart_pole(env, obs)
plt.imshow(img)
plt.axis("off")
plt.show()
In [26]:
plot_cart_pole(env, obs)
Now let's look at the action space:
In [27]:
env.action_space
Out[27]:
Yep, just two possible actions: accelerate towards the left or towards the right. Let's push the cart left until the pole falls:
In [28]:
obs = env.reset()
while True:
obs, reward, done, info = env.step(0)
if done:
break
In [29]:
plt.close() # or else nbagg sometimes plots in the previous cell
img = render_cart_pole(env, obs)
plt.imshow(img)
plt.axis("off")
save_fig("cart_pole_plot")
In [30]:
img.shape
Out[30]:
Notice that the game is over when the pole tilts too much, not when it actually falls. Now let's reset the environment and push the cart to the right instead:
In [31]:
obs = env.reset()
while True:
obs, reward, done, info = env.step(1)
if done:
break
In [32]:
plot_cart_pole(env, obs)
Looks like it's doing what we're telling it to do. Now how can we make the pole remain upright? We will need to define a policy for that: the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do.
Let's hard code a simple strategy: if the pole is tilting to the left, then push the cart to the left, and vice versa. Let's see if that works:
In [33]:
frames = []
n_max_steps = 1000
n_change_steps = 10
obs = env.reset()
for step in range(n_max_steps):
img = render_cart_pole(env, obs)
frames.append(img)
# hard-coded policy
position, velocity, angle, angular_velocity = obs
if angle < 0:
action = 0
else:
action = 1
obs, reward, done, info = env.step(action)
if done:
break
In [34]:
video = plot_animation(frames)
plt.show()
Nope, the system is unstable and after just a few wobbles, the pole ends up too tilted: game over. We will need to be smarter than that!
Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will first estimate a probability for each action, then select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability p of action 0 (left), and of course the probability of action 1 (right) will be 1 - p.
Note: instead of using the fully_connected() function from the tensorflow.contrib.layers module (as in the book), we now use the dense() function from the tf.layers module, which did not exist when this chapter was written. This is preferable because anything in contrib may change or be deleted without notice, while tf.layers is part of the official API. As you will see, the code is mostly the same.

The main differences relevant to this chapter are:

- the _fn suffix was removed in all the parameters that had it (for example the activation_fn parameter was renamed to activation),
- the weights parameter was renamed to kernel,
- the default activation is now None instead of tf.nn.relu.
In [35]:
import tensorflow as tf
# 1. Specify the network architecture
n_inputs = 4 # == env.observation_space.shape[0]
n_hidden = 4 # it's a simple task, we don't need more than this
n_outputs = 1 # only outputs the probability of accelerating left
initializer = tf.variance_scaling_initializer()
# 2. Build the neural network
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu,
kernel_initializer=initializer)
outputs = tf.layers.dense(hidden, n_outputs, activation=tf.nn.sigmoid,
kernel_initializer=initializer)
# 3. Select a random action based on the estimated probabilities
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
init = tf.global_variables_initializer()
In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state then you may need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is if the observations are noisy: you may want to use the past few observations to estimate the most likely current state. Our problem is thus as simple as can be: the current observation is noise-free and contains the environment's full state.
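To illustrate the partially observable case described above (a minimal sketch with hypothetical numbers), the cart's velocity could be approximated by a finite difference over two consecutive position observations:

prev_position, position = 0.02, 0.05  # hypothetical consecutive position readings
dt = 0.02                             # the timestep used by Gym's Cart-Pole physics (tau)
estimated_velocity = (position - prev_position) / dt  # 1.5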
You may wonder why we are picking a random action based on the probability given by the policy network, rather than just picking the action with the highest probability. This approach lets the agent find the right balance between exploring new actions and exploiting the actions that are known to work well. Here's an analogy: suppose you go to a restaurant for the first time, and all the dishes look equally appealing so you randomly pick one. If it turns out to be good, you can increase the probability to order it next time, but you shouldn't increase that probability to 100%, or else you will never try out the other dishes, some of which may be even better than the one you tried.
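Concretely, sampling an action from the estimated probability rather than taking the argmax is a one-liner (a minimal sketch using a made-up probability in place of the network's output):

p_left = 0.8  # suppose the policy network estimates an 80% probability for action 0 (left)
action = 0 if np.random.rand() < p_left else 1  # mostly exploit, but keep exploring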
Let's randomly initialize this policy neural network and use it to play one game:
In [36]:
n_max_steps = 1000
frames = []
with tf.Session() as sess:
init.run()
obs = env.reset()
for step in range(n_max_steps):
img = render_cart_pole(env, obs)
frames.append(img)
action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
if done:
break
env.close()
Now let's look at how well this randomly initialized policy network performed:
In [37]:
video = plot_animation(frames)
plt.show()
Yeah... pretty bad. The neural network will have to learn to do better. First let's see if it is capable of learning the basic policy we used earlier: go left if the pole is tilting left, and go right if it is tilting right. The following code defines the same neural network, but this time we add the target probabilities y, and the training operations (cross_entropy, optimizer and training_op):
In [38]:
import tensorflow as tf
reset_graph()
n_inputs = 4
n_hidden = 4
n_outputs = 1
learning_rate = 0.01
initializer = tf.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
y = tf.placeholder(tf.float32, shape=[None, n_outputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits) # probability of action 0 (left)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(cross_entropy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
We can make the same net play in 10 different environments in parallel, and train for 1000 iterations. We also reset environments when they are done.
In [39]:
n_environments = 10
n_iterations = 1000
envs = [gym.make("CartPole-v0") for _ in range(n_environments)]
observations = [env.reset() for env in envs]
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
target_probas = np.array([([1.] if obs[2] < 0 else [0.]) for obs in observations]) # if angle<0 we want proba(left)=1., or else proba(left)=0.
action_val, _ = sess.run([action, training_op], feed_dict={X: np.array(observations), y: target_probas})
for env_index, env in enumerate(envs):
obs, reward, done, info = env.step(action_val[env_index][0])
observations[env_index] = obs if not done else env.reset()
saver.save(sess, "./my_policy_net_basic.ckpt")
for env in envs:
env.close()
In [40]:
def render_policy_net(model_path, action, X, n_max_steps = 1000):
frames = []
env = gym.make("CartPole-v0")
obs = env.reset()
with tf.Session() as sess:
saver.restore(sess, model_path)
for step in range(n_max_steps):
img = render_cart_pole(env, obs)
frames.append(img)
action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
if done:
break
env.close()
return frames
In [41]:
frames = render_policy_net("./my_policy_net_basic.ckpt", action, X)
video = plot_animation(frames)
plt.show()
Looks like it learned the policy correctly. Now let's see if it can learn a better policy on its own.
To train this neural network we will need to define the target probabilities y. If an action is good we should increase its probability, and conversely if it is bad we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in a game, it is not clear which actions contributed to this result: was it just the last action? Or the last 10? Or just one action 50 steps earlier? This is called the credit assignment problem. A common solution is to score each action by the sum of discounted rewards that follow it: for example, with a discount rate of 0.8, an action followed by rewards of 10, 0 and -50 gets a score of 10 + 0.8 × 0 + 0.8² × (-50) = -22 (this is what the discount_rewards() function below computes).
The Policy Gradients algorithm tackles this problem by first playing multiple games, then making the actions in good games slightly more likely, while actions in bad games are made slightly less likely. First we play, then we go back and think about what we did.
In [42]:
import tensorflow as tf
reset_graph()
n_inputs = 4
n_hidden = 4
n_outputs = 1
learning_rate = 0.01
initializer = tf.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits) # probability of action 0 (left)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
y = 1. - tf.to_float(action)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
gradient_placeholders.append(gradient_placeholder)
grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
In [43]:
def discount_rewards(rewards, discount_rate):
discounted_rewards = np.zeros(len(rewards))
cumulative_rewards = 0
for step in reversed(range(len(rewards))):
cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
discounted_rewards[step] = cumulative_rewards
return discounted_rewards
def discount_and_normalize_rewards(all_rewards, discount_rate):
all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards]
flat_rewards = np.concatenate(all_discounted_rewards)
reward_mean = flat_rewards.mean()
reward_std = flat_rewards.std()
return [(discounted_rewards - reward_mean)/reward_std for discounted_rewards in all_discounted_rewards]
In [44]:
discount_rewards([10, 0, -50], discount_rate=0.8)
Out[44]:
In [45]:
discount_and_normalize_rewards([[10, 0, -50], [10, 20]], discount_rate=0.8)
Out[45]:
In [46]:
env = gym.make("CartPole-v0")
n_games_per_update = 10
n_max_steps = 1000
n_iterations = 250
save_iterations = 10
discount_rate = 0.95
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
print("\rIteration: {}".format(iteration), end="")
all_rewards = []
all_gradients = []
for game in range(n_games_per_update):
current_rewards = []
current_gradients = []
obs = env.reset()
for step in range(n_max_steps):
action_val, gradients_val = sess.run([action, gradients], feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
current_rewards.append(reward)
current_gradients.append(gradients_val)
if done:
break
all_rewards.append(current_rewards)
all_gradients.append(current_gradients)
all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate=discount_rate)
feed_dict = {}
for var_index, gradient_placeholder in enumerate(gradient_placeholders):
mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index]
for game_index, rewards in enumerate(all_rewards)
for step, reward in enumerate(rewards)], axis=0)
feed_dict[gradient_placeholder] = mean_gradients
sess.run(training_op, feed_dict=feed_dict)
if iteration % save_iterations == 0:
saver.save(sess, "./my_policy_net_pg.ckpt")
In [47]:
env.close()
In [48]:
frames = render_policy_net("./my_policy_net_pg.ckpt", action, X, n_max_steps=1000)
video = plot_animation(frames)
plt.show()
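Before moving on to Q-Learning, let's step back and look at some theory, starting with Markov chains. In the cell below, transition_probabilities[s][s'] gives the probability of going from state s to state s'; state 3 is terminal (it always transitions to itself). We run a few random sequences through this 4-state chain: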
In [49]:
transition_probabilities = [
[0.7, 0.2, 0.0, 0.1], # from s0 to s0, s1, s2, s3
[0.0, 0.0, 0.9, 0.1], # from s1 to ...
[0.0, 1.0, 0.0, 0.0], # from s2 to ...
[0.0, 0.0, 0.0, 1.0], # from s3 to ...
]
n_max_steps = 50
def print_sequence(start_state=0):
current_state = start_state
print("States:", end=" ")
for step in range(n_max_steps):
print(current_state, end=" ")
if current_state == 3:
break
current_state = np.random.choice(range(4), p=transition_probabilities[current_state])
else:
print("...", end="")
print()
for _ in range(10):
print_sequence()
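Next comes a simple Markov decision process with 3 states and up to 3 possible actions per state. The transition probabilities now depend on the chosen action, and some transitions yield rewards. We define three policies (policy_fire, policy_random and policy_safe) and compare their total rewards over 1,000 episodes of 100 steps each: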
In [50]:
transition_probabilities = [
[[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]], # in s0, if action a0 then proba 0.7 to state s0 and 0.3 to state s1, etc.
[[0.0, 1.0, 0.0], None, [0.0, 0.0, 1.0]],
[None, [0.8, 0.1, 0.1], None],
]
rewards = [
[[+10, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, -50]],
[[0, 0, 0], [+40, 0, 0], [0, 0, 0]],
]
possible_actions = [[0, 1, 2], [0, 2], [1]]
def policy_fire(state):
return [0, 2, 1][state]
def policy_random(state):
return np.random.choice(possible_actions[state])
def policy_safe(state):
return [0, 0, 1][state]
class MDPEnvironment(object):
def __init__(self, start_state=0):
self.start_state=start_state
self.reset()
def reset(self):
self.total_rewards = 0
self.state = self.start_state
def step(self, action):
next_state = np.random.choice(range(3), p=transition_probabilities[self.state][action])
reward = rewards[self.state][action][next_state]
self.state = next_state
self.total_rewards += reward
return self.state, reward
def run_episode(policy, n_steps, start_state=0, display=True):
env = MDPEnvironment()
if display:
print("States (+rewards):", end=" ")
for step in range(n_steps):
if display:
if step == 10:
print("...", end=" ")
elif step < 10:
print(env.state, end=" ")
action = policy(env.state)
state, reward = env.step(action)
if display and step < 10:
if reward:
print("({})".format(reward), end=" ")
if display:
print("Total rewards =", env.total_rewards)
return env.total_rewards
for policy in (policy_fire, policy_random, policy_safe):
all_totals = []
print(policy.__name__)
for episode in range(1000):
all_totals.append(run_episode(policy, n_steps=100, display=(episode<5)))
print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals)))
print()
Q-Learning works by watching an agent play (e.g., randomly) and gradually improving its estimates of the Q-Values. Once it has accurate Q-Value estimates (or close enough), then the optimal policy consists in choosing the action that has the highest Q-Value (i.e., the greedy policy).
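In equation form, the Q-Value update implemented in the next cell is (with learning rate $\alpha$ and discount rate $\gamma$):

$Q(s, a) \gets (1 - \alpha) \, Q(s, a) + \alpha \left( r + \gamma \cdot \underset{a'}{\max} \, Q(s', a') \right)$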
In [51]:
n_states = 3
n_actions = 3
n_steps = 20000
alpha = 0.01
gamma = 0.99
exploration_policy = policy_random
q_values = np.full((n_states, n_actions), -np.inf)
for state, actions in enumerate(possible_actions):
q_values[state][actions]=0
env = MDPEnvironment()
for step in range(n_steps):
action = exploration_policy(env.state)
state = env.state
next_state, reward = env.step(action)
next_value = np.max(q_values[next_state]) # greedy policy
q_values[state, action] = (1-alpha)*q_values[state, action] + alpha*(reward + gamma * next_value)
In [52]:
def optimal_policy(state):
return np.argmax(q_values[state])
In [53]:
q_values
Out[53]:
In [54]:
all_totals = []
for episode in range(1000):
all_totals.append(run_episode(optimal_policy, n_steps=100, display=(episode<5)))
print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals)))
print()
Warning: Unfortunately, the first version of the book contained two important errors in this section; the code below fixes them. It uses two networks: an online DQN that learns and plays, and a target DQN that is only used to compute the target Q-Values for training:

$y(s, a) = r + \gamma \cdot \underset{a'}{\max} \, Q_\text{target}(s', a')$

I hope these errors did not affect you, and if they did, I sincerely apologize. Let's now load the MsPacman environment again and build a DQN to learn to play it.
In [55]:
env = gym.make("MsPacman-v0")
obs = env.reset()
obs.shape
Out[55]:
In [56]:
env.action_space
Out[56]:
Preprocessing the images is optional but greatly speeds up training.
In [57]:
mspacman_color = 210 + 164 + 74  # == 448, the channel sum of Ms. Pacman's RGB color (210, 164, 74)
def preprocess_observation(obs):
img = obs[1:176:2, ::2] # crop and downsize
img = img.sum(axis=2) # to greyscale
img[img==mspacman_color] = 0 # Improve contrast
img = (img // 3 - 128).astype(np.int8) # normalize from -128 to 127
return img.reshape(88, 80, 1)
img = preprocess_observation(obs)
Note: the preprocess_observation() function is slightly different from the one in the book: instead of representing pixels as 64-bit floats from -1.0 to 1.0, it represents them as signed bytes (from -128 to 127). The benefit is that the replay memory will take up roughly 8 times less RAM (about 6.5 GB instead of 52 GB). The reduced precision has no visible impact on training.
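As a rough sanity check of those numbers (a back-of-the-envelope sketch that ignores Python object overhead and the scalar fields), each memory holds two 88×80×1 frames, the state and the next state:

frame_bytes = 88 * 80 * 1           # one byte per pixel with int8
memory_bytes = 2 * frame_bytes      # state + next_state dominate each memory
print(500000 * memory_bytes / 1e9)  # ~7 GB with int8; 8x more with 64-bit floats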
In [58]:
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title("Original observation (160×210 RGB)")
plt.imshow(obs)
plt.axis("off")
plt.subplot(122)
plt.title("Preprocessed observation (88×80 greyscale)")
plt.imshow(img.reshape(88, 80), interpolation="nearest", cmap="gray")
plt.axis("off")
save_fig("preprocessing_plot")
plt.show()
Note: instead of using tf.contrib.layers.convolution2d() or tf.contrib.layers.conv2d() (as in the first version of the book), we now use the tf.layers.conv2d() function, which did not exist when this chapter was written. This is preferable because anything in contrib may change or be deleted without notice, while tf.layers is part of the official API. As you will see, the code is mostly the same, except that the parameter names have changed slightly:

- the num_outputs parameter was renamed to filters,
- the stride parameter was renamed to strides,
- the _fn suffix was removed from parameter names that had it (e.g., activation_fn was renamed to activation),
- the weights_initializer parameter was renamed to kernel_initializer,
- the variable holding the layer's weights is now named "kernel" (instead of "weights"), and the biases variable was renamed from "biases" to "bias",
- the default activation is now None instead of tf.nn.relu.
In [59]:
reset_graph()
input_height = 88
input_width = 80
input_channels = 1
conv_n_maps = [32, 64, 64]
conv_kernel_sizes = [(8,8), (4,4), (3,3)]
conv_strides = [4, 2, 1]
conv_paddings = ["SAME"] * 3
conv_activation = [tf.nn.relu] * 3
n_hidden_in = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each
n_hidden = 512
hidden_activation = tf.nn.relu
n_outputs = env.action_space.n # 9 discrete actions are available
initializer = tf.variance_scaling_initializer()
def q_network(X_state, name):
prev_layer = X_state / 128.0 # scale pixel intensities to the [-1.0, 1.0] range.
with tf.variable_scope(name) as scope:
for n_maps, kernel_size, strides, padding, activation in zip(
conv_n_maps, conv_kernel_sizes, conv_strides,
conv_paddings, conv_activation):
prev_layer = tf.layers.conv2d(
prev_layer, filters=n_maps, kernel_size=kernel_size,
strides=strides, padding=padding, activation=activation,
kernel_initializer=initializer)
last_conv_layer_flat = tf.reshape(prev_layer, shape=[-1, n_hidden_in])
hidden = tf.layers.dense(last_conv_layer_flat, n_hidden,
activation=hidden_activation,
kernel_initializer=initializer)
outputs = tf.layers.dense(hidden, n_outputs,
kernel_initializer=initializer)
trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope=scope.name)
trainable_vars_by_name = {var.name[len(scope.name):]: var
for var in trainable_vars}
return outputs, trainable_vars_by_name
In [60]:
X_state = tf.placeholder(tf.float32, shape=[None, input_height, input_width,
input_channels])
online_q_values, online_vars = q_network(X_state, name="q_networks/online")
target_q_values, target_vars = q_network(X_state, name="q_networks/target")
copy_ops = [target_var.assign(online_vars[var_name])
for var_name, target_var in target_vars.items()]
copy_online_to_target = tf.group(*copy_ops)
In [61]:
online_vars
Out[61]:
In [62]:
learning_rate = 0.001
momentum = 0.95
with tf.variable_scope("train"):
X_action = tf.placeholder(tf.int32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None, 1])
q_value = tf.reduce_sum(online_q_values * tf.one_hot(X_action, n_outputs),
axis=1, keepdims=True)
error = tf.abs(y - q_value)
clipped_error = tf.clip_by_value(error, 0.0, 1.0)
linear_error = 2 * (error - clipped_error)
loss = tf.reduce_mean(tf.square(clipped_error) + linear_error)
global_step = tf.Variable(0, trainable=False, name='global_step')
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True)
training_op = optimizer.minimize(loss, global_step=global_step)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Note: in the first version of the book, the loss function was simply the squared error between the target Q-Values (y) and the estimated Q-Values (q_value). However, because the experiences are very noisy, it is better to use a quadratic loss only for small errors (below 1.0) and a linear loss (twice the absolute error) for larger errors, which is what the code above computes. This way large errors don't push the model parameters around as much. Note that we also tweaked some hyperparameters (using a smaller learning rate, and using Nesterov Accelerated Gradients rather than Adam optimization, since adaptive gradient algorithms may sometimes be bad, according to this paper). We also tweaked a few other hyperparameters below (a larger replay memory, longer decay for the $\epsilon$-greedy policy, larger discount rate, less frequent copies of the online DQN to the target DQN, etc.).
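In fact, this piecewise loss is exactly twice the Huber loss (with $\delta = 1.0$). Here is a quick NumPy check of that claim (a standalone sketch, not part of the training graph):

def book_loss(error):
    # quadratic below 1.0, linear (slope 2) above it, as in the cell above
    clipped = np.clip(error, 0.0, 1.0)
    return clipped**2 + 2 * (error - clipped)

def huber(error, delta=1.0):
    return np.where(error <= delta, 0.5 * error**2, delta * (error - 0.5 * delta))

errors = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
print(np.allclose(book_loss(errors), 2 * huber(errors)))  # True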
We use this ReplayMemory class instead of a deque because it is much faster for random access (thanks to @NileshPS who contributed it). Moreover, we default to sampling with replacement, which is much faster than sampling without replacement for large replay memories.
In [63]:
class ReplayMemory:
def __init__(self, maxlen):
self.maxlen = maxlen
self.buf = np.empty(shape=maxlen, dtype=np.object)
self.index = 0
self.length = 0
def append(self, data):
self.buf[self.index] = data
self.length = min(self.length + 1, self.maxlen)
self.index = (self.index + 1) % self.maxlen
def sample(self, batch_size, with_replacement=True):
if with_replacement:
indices = np.random.randint(self.length, size=batch_size) # faster
else:
indices = np.random.permutation(self.length)[:batch_size]
return self.buf[indices]
In [64]:
replay_memory_size = 500000
replay_memory = ReplayMemory(replay_memory_size)
In [65]:
def sample_memories(batch_size):
cols = [[], [], [], [], []] # state, action, reward, next_state, continue
for memory in replay_memory.sample(batch_size):
for col, value in zip(cols, memory):
col.append(value)
cols = [np.array(col) for col in cols]
return cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1)
In [66]:
eps_min = 0.1
eps_max = 1.0
eps_decay_steps = 2000000
def epsilon_greedy(q_values, step):
epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps)
if np.random.rand() < epsilon:
return np.random.randint(n_outputs) # random action
else:
return np.argmax(q_values) # optimal action
In [67]:
n_steps = 4000000 # total number of training steps
training_start = 10000 # start training after 10,000 game iterations
training_interval = 4 # run a training step every 4 game iterations
save_steps = 1000 # save the model every 1,000 training steps
copy_steps = 10000 # copy online DQN to target DQN every 10,000 training steps
discount_rate = 0.99
skip_start = 90 # Skip the start of every game (it's just waiting time).
batch_size = 50
iteration = 0 # game iterations
checkpoint_path = "./my_dqn.ckpt"
done = True # env needs to be reset
A few variables for tracking progress:
In [68]:
loss_val = np.infty
game_length = 0
total_max_q = 0
mean_max_q = 0.0
And now the main training loop!
In [69]:
with tf.Session() as sess:
if os.path.isfile(checkpoint_path + ".index"):
saver.restore(sess, checkpoint_path)
else:
init.run()
copy_online_to_target.run()
while True:
step = global_step.eval()
if step >= n_steps:
break
iteration += 1
print("\rIteration {}\tTraining step {}/{} ({:.1f})%\tLoss {:5f}\tMean Max-Q {:5f} ".format(
iteration, step, n_steps, step * 100 / n_steps, loss_val, mean_max_q), end="")
if done: # game over, start again
obs = env.reset()
for skip in range(skip_start): # skip the start of each game
obs, reward, done, info = env.step(0)
state = preprocess_observation(obs)
# Online DQN evaluates what to do
q_values = online_q_values.eval(feed_dict={X_state: [state]})
action = epsilon_greedy(q_values, step)
# Online DQN plays
obs, reward, done, info = env.step(action)
next_state = preprocess_observation(obs)
# Let's memorize what happened
replay_memory.append((state, action, reward, next_state, 1.0 - done))
state = next_state
# Compute statistics for tracking progress (not shown in the book)
total_max_q += q_values.max()
game_length += 1
if done:
mean_max_q = total_max_q / game_length
total_max_q = 0.0
game_length = 0
if iteration < training_start or iteration % training_interval != 0:
continue # only train after warmup period and at regular intervals
# Sample memories and use the target DQN to produce the target Q-Value
X_state_val, X_action_val, rewards, X_next_state_val, continues = (
sample_memories(batch_size))
next_q_values = target_q_values.eval(
feed_dict={X_state: X_next_state_val})
max_next_q_values = np.max(next_q_values, axis=1, keepdims=True)
y_val = rewards + continues * discount_rate * max_next_q_values
# Train the online DQN
_, loss_val = sess.run([training_op, loss], feed_dict={
X_state: X_state_val, X_action: X_action_val, y: y_val})
# Regularly copy the online DQN to the target DQN
if step % copy_steps == 0:
copy_online_to_target.run()
# And save regularly
if step % save_steps == 0:
saver.save(sess, checkpoint_path)
You can interrupt the cell above at any time to test your agent using the cell below. You can then run the cell above once again; it will load the last parameters saved and resume training.
In [70]:
frames = []
n_max_steps = 10000
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
obs = env.reset()
for step in range(n_max_steps):
state = preprocess_observation(obs)
# Online DQN evaluates what to do
q_values = online_q_values.eval(feed_dict={X_state: [state]})
action = np.argmax(q_values)
# Online DQN plays
obs, reward, done, info = env.step(action)
img = env.render(mode="rgb_array")
frames.append(img)
if done:
break
In [71]:
plot_animation(frames)
Out[71]:
Here is a preprocessing function you can use to train a DQN for the Breakout-v0 Atari game:
In [72]:
def preprocess_observation(obs):
img = obs[34:194:2, ::2] # crop and downsize
return np.mean(img, axis=2).reshape(80, 80) / 255.0
In [73]:
env = gym.make("Breakout-v0")
obs = env.reset()
for step in range(10):
obs, _, _, _ = env.step(1)
img = preprocess_observation(obs)
In [74]:
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title("Original observation (160×210 RGB)")
plt.imshow(obs)
plt.axis("off")
plt.subplot(122)
plt.title("Preprocessed observation (80×80 grayscale)")
plt.imshow(img, interpolation="nearest", cmap="gray")
plt.axis("off")
plt.show()
As you can see, a single image does not give you the direction and speed of the ball, which are crucial information for playing this game. For this reason, it is best to actually combine several consecutive observations to create the environment's state representation. One way to do that is to create a multi-channel image, with one channel per recent observation. Another is to merge all recent observations into a single-channel image, using np.max(). In this case, we need to dim the older images so that the DQN can distinguish the past from the present.
In [75]:
from collections import deque
def combine_observations_multichannel(preprocessed_observations):
return np.array(preprocessed_observations).transpose([1, 2, 0])
def combine_observations_singlechannel(preprocessed_observations, dim_factor=0.5):
dimmed_observations = [obs * dim_factor**index
for index, obs in enumerate(reversed(preprocessed_observations))]
return np.max(np.array(dimmed_observations), axis=0)
n_observations_per_state = 3
preprocessed_observations = deque([], maxlen=n_observations_per_state)
obs = env.reset()
for step in range(10):
obs, _, _, _ = env.step(1)
preprocessed_observations.append(preprocess_observation(obs))
In [76]:
img1 = combine_observations_multichannel(preprocessed_observations)
img2 = combine_observations_singlechannel(preprocessed_observations)
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title("Multichannel state")
plt.imshow(img1, interpolation="nearest")
plt.axis("off")
plt.subplot(122)
plt.title("Singlechannel state")
plt.imshow(img2, interpolation="nearest", cmap="gray")
plt.axis("off")
plt.show()
See Appendix A for the solutions to the first exercises.
Exercise: Use policy gradients to tackle OpenAI gym's "BipedalWalker-v2".
In [77]:
import gym
In [78]:
env = gym.make("BipedalWalker-v2")
Note: if you run into this issue ("module 'Box2D._Box2D' has no attribute 'RAND_LIMIT'") when making the BipedalWalker-v2 environment, then try this workaround:
$ pip uninstall Box2D-kengz
$ pip install git+https://github.com/pybox2d/pybox2d
In [79]:
obs = env.reset()
In [80]:
img = env.render(mode="rgb_array")
In [81]:
plt.imshow(img)
plt.axis("off")
plt.show()
In [82]:
obs
Out[82]:
You can find the meaning of each of these 24 numbers in the documentation.
In [83]:
env.action_space
Out[83]:
In [84]:
env.action_space.low
Out[84]:
In [85]:
env.action_space.high
Out[85]:
This is a 4D continuous action space controlling each leg's hip torque and knee torque (from -1 to 1). To deal with a continuous action space, one method is to discretize it. For example, let's limit the possible torque values to these 3 values: -1.0, 0.0, and 1.0. This means that we are left with $3^4=81$ possible actions.
In [86]:
from itertools import product
In [87]:
possible_torques = np.array([-1.0, 0.0, 1.0])
possible_actions = np.array(list(product(possible_torques, possible_torques, possible_torques, possible_torques)))
possible_actions.shape
Out[87]:
In [88]:
tf.reset_default_graph()
# 1. Specify the network architecture
n_inputs = env.observation_space.shape[0] # == 24
n_hidden = 10
n_outputs = len(possible_actions)  # == 81 (3^4)
initializer = tf.variance_scaling_initializer()
# 2. Build the neural network
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.selu,
kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs,
kernel_initializer=initializer)
outputs = tf.nn.softmax(logits)
# 3. Select a random action based on the estimated probabilities
action_index = tf.squeeze(tf.multinomial(logits, num_samples=1), axis=-1)
# 4. Training
learning_rate = 0.01
y = tf.one_hot(action_index, depth=len(possible_actions))
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
gradient_placeholders.append(gradient_placeholder)
grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Let's try running this policy network, although it is not trained yet.
In [89]:
def run_bipedal_walker(model_path=None, n_max_steps = 1000):
env = gym.make("BipedalWalker-v2")
frames = []
with tf.Session() as sess:
if model_path is None:
init.run()
else:
saver.restore(sess, model_path)
obs = env.reset()
for step in range(n_max_steps):
img = env.render(mode="rgb_array")
frames.append(img)
action_index_val = action_index.eval(feed_dict={X: obs.reshape(1, n_inputs)})
action = possible_actions[action_index_val]
obs, reward, done, info = env.step(action[0])
if done:
break
env.close()
return frames
In [90]:
frames = run_bipedal_walker()
video = plot_animation(frames)
plt.show()
Nope, it really can't walk. So let's train it!
In [91]:
n_games_per_update = 10
n_max_steps = 1000
n_iterations = 1000
save_iterations = 10
discount_rate = 0.95
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
print("\rIteration: {}/{}".format(iteration + 1, n_iterations), end="")
all_rewards = []
all_gradients = []
for game in range(n_games_per_update):
current_rewards = []
current_gradients = []
obs = env.reset()
for step in range(n_max_steps):
action_index_val, gradients_val = sess.run([action_index, gradients],
feed_dict={X: obs.reshape(1, n_inputs)})
action = possible_actions[action_index_val]
obs, reward, done, info = env.step(action[0])
current_rewards.append(reward)
current_gradients.append(gradients_val)
if done:
break
all_rewards.append(current_rewards)
all_gradients.append(current_gradients)
all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate=discount_rate)
feed_dict = {}
for var_index, gradient_placeholder in enumerate(gradient_placeholders):
mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index]
for game_index, rewards in enumerate(all_rewards)
for step, reward in enumerate(rewards)], axis=0)
feed_dict[gradient_placeholder] = mean_gradients
sess.run(training_op, feed_dict=feed_dict)
if iteration % save_iterations == 0:
saver.save(sess, "./my_bipedal_walker_pg.ckpt")
In [92]:
frames = run_bipedal_walker("./my_bipedal_walker_pg.ckpt")
video = plot_animation(frames)
plt.show()
Not the best walker, but at least it stays up and makes (slow) progress to the right. A better solution for this problem is to use an actor-critic algorithm, as it does not require discretizing the action space, and it converges much faster. Check out this nice blog post by Yash Patel for more details.
Let's explore the Pong-v0 OpenAI Gym environment.
In [93]:
import gym
env = gym.make('Pong-v0')
obs = env.reset()
In [94]:
obs.shape
Out[94]:
In [95]:
env.action_space
Out[95]:
We see the observation space is a 210x160 RGB image. The action space is a Discrete(6) space with 6 different actions: actions 0 and 1 do nothing, actions 2 and 4 move the paddle up, and finally actions 3 and 5 move the paddle down. The paddle is free to move immediately but the ball does not appear until after 18 steps into the episode.
Let's play a game with a completely random policy and plot the resulting animation.
In [96]:
# A helper function to run an episode of Pong. Its first argument should be a
# function which takes the observation of the environment and the current
# iteration and produces an action for the agent to take.
def run_episode(policy, n_max_steps=1000, frames_per_action=1):
obs = env.reset()
frames = []
for i in range(n_max_steps):
obs, reward, done, info = env.step(policy(obs, i))
frames.append(env.render(mode='rgb_array'))
if done:
break
return plot_animation(frames)
In [97]:
run_episode(lambda obs, i: np.random.randint(0, 6))  # pick one of the 6 actions at random
The random policy does not fare very well. So let's try to use the DQN and see if we can do better.
First let's write a preprocessing function to scale down the input state. Since a single observation does not tell us about the ball's velocity, we will also need to combine multiple observations into a single state. Below is the preprocessing code for this environment. The preprocessing algorithm is two-fold:

1. Convert the image in the observation to black and white and scale it down to 80x80 pixels.
2. Combine 3 observations into a single state which depicts the velocity of the paddles and the ball.
In [98]:
green_paddle_color = (92, 186, 92)
red_paddle_color = (213, 130, 74)
background_color = (144, 72, 17)
ball_color = (236, 236, 236)
def preprocess_observation(obs):
img = obs[34:194:2, ::2].reshape(-1, 3)
tmp = np.full(shape=(80 * 80), fill_value=0.0, dtype=np.float32)
for i, c in enumerate(img):
c = tuple(c)
if c in {green_paddle_color, red_paddle_color, ball_color}:
tmp[i] = 1.0
else:
tmp[i] = 0.0
return tmp.reshape(80, 80)
In [99]:
obs = env.reset()
for _ in range(25):
obs, _, _, _ = env.step(0)
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title('Original Observation (160 x 210 RGB)')
plt.imshow(obs)
plt.axis('off')
plt.subplot(122)
plt.title('Preprocessed Observation (80 x 80 Grayscale)')
plt.imshow(preprocess_observation(obs), interpolation='nearest', cmap='gray')
plt.axis('off')
plt.show()
In [100]:
def combine_observations(preprocess_observations, dim_factor=0.75):
dimmed = [obs * (dim_factor ** idx)
for idx, obs in enumerate(reversed(preprocess_observations))]
return np.max(np.array(dimmed), axis=0)
In [101]:
n_observations_per_state = 3
obs = env.reset()
for _ in range(20):
obs, _, _, _ = env.step(0)
preprocess_observations = []
for _ in range(n_observations_per_state):
obs, _, _, _ = env.step(2)
preprocess_observations.append(preprocess_observation(obs))
img = combine_observations(preprocess_observations)
plt.figure(figsize=(6, 6))
plt.title('Combined Observations as a Single State')
plt.imshow(img, interpolation='nearest', cmap='gray')
plt.axis('off')
plt.show()
Now we are going to build the DQN. Like the Ms. Pacman DQN above, this model has 3 convolutional layers, then a hidden fully connected layer, and finally a fully connected output layer with 6 neurons, one per possible action.
In [102]:
reset_graph()
input_width = 80
input_height = 80
input_channels = 1
conv_n_maps = [32, 64, 64]
conv_kernel_sizes = [9, 5, 3]
conv_kernel_strides = [4, 2, 1]
conv_paddings = ['VALID'] * 3
conv_activation = [tf.nn.relu] * 3
n_hidden_in = 5 * 5 * 64
n_hidden = 512
hidden_activation = tf.nn.relu
n_outputs = env.action_space.n
he_init = tf.contrib.layers.variance_scaling_initializer()
This model will use two DQNs, an online DQN and a target DQN. The online DQN learns new parameters at each training step. The target DQN is used to compute the target Q-Values for the online DQN's loss function during training. The online DQN's parameters are copied to the target DQN at regular intervals.
In [103]:
def q_network(X_state, name):
prev_layer = X_state
with tf.variable_scope(name) as scope:
for n_maps, kernel_size, strides, padding, activation in zip(
conv_n_maps, conv_kernel_sizes, conv_kernel_strides, conv_paddings,
conv_activation):
prev_layer = tf.layers.conv2d(prev_layer, filters=n_maps,
kernel_size=kernel_size,
strides=strides, padding=padding,
activation=activation,
kernel_initializer=he_init)
flattened = tf.reshape(prev_layer, [-1, n_hidden_in])
hidden = tf.layers.dense(flattened, n_hidden,
activation=hidden_activation,
kernel_initializer=he_init)
outputs = tf.layers.dense(hidden, n_outputs, kernel_initializer=he_init)
trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope=scope.name)
trainable_vars_by_name = {var.name[len(scope.name):]: var
for var in trainable_vars}
return outputs, trainable_vars_by_name
In [104]:
# Starting the DQN definition.
X_state = tf.placeholder(tf.float32, shape=(None, input_height, input_width,
input_channels))
online_q_values, online_vars = q_network(X_state, 'q_networks/online')
target_q_values, target_vars = q_network(X_state, 'q_networks/target')
copy_ops = [var.assign(online_vars[name]) for name, var in target_vars.items()]
copy_online_to_target = tf.group(*copy_ops)
In [105]:
# Defining the training objective.
learning_rate = 1e-3
momentum = 0.95
with tf.variable_scope('training') as scope:
X_action = tf.placeholder(tf.int32, shape=(None,))
y = tf.placeholder(tf.float32, shape=(None, 1))
Q_target = tf.reduce_sum(online_q_values * tf.one_hot(X_action, n_outputs),
axis=1, keepdims=True)
error = tf.abs(y - Q_target)
loss = tf.reduce_mean(tf.square(error))
global_step = tf.Variable(0, trainable=False, name='global_step')
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum,
use_nesterov=True)
training_op = optimizer.minimize(loss, global_step=global_step)
In [106]:
init = tf.global_variables_initializer()
saver = tf.train.Saver()
This model will sample past experiences from a Replay Memory; this will hopefully help the model learn which higher-level patterns to pay attention to in order to find the right action. It also reduces the chance that the model's behavior gets too correlated with its most recent experiences.
The replay memory will store its data in the kernel's memory (RAM).
In [107]:
class ReplayMemory(object):
def __init__(self, maxlen):
self.maxlen = maxlen
self.buf = np.empty(shape=maxlen, dtype=np.object)
self.index = 0
self.length = 0
def append(self, data):
self.buf[self.index] = data
self.index += 1
self.index %= self.maxlen
self.length = min(self.length + 1, self.maxlen)
def sample(self, batch_size):
return self.buf[np.random.randint(self.length, size=batch_size)]
In [108]:
replay_size = 200000
replay_memory = ReplayMemory(replay_size)
In [109]:
def sample_memories(batch_size):
cols = [[], [], [], [], []] # state, action, reward, next_state, continue
for memory in replay_memory.sample(batch_size):
for col, value in zip(cols, memory):
col.append(value)
cols = [np.array(col) for col in cols]
return cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], \
cols[4].reshape(-1, 1)
Now let's define the model's policy during training. Just like in MsPacMan.ipynb, we will use an $\varepsilon$-greedy policy.
In [110]:
eps_min = 0.1
eps_max = 1.0
eps_decay_steps = 6000000
def epsilon_greedy(q_values, step):
    # Use max() so that epsilon decays from eps_max down to eps_min over
    # eps_decay_steps steps (min() would pin epsilon at eps_min immediately).
    epsilon = max(eps_min,
                  eps_max - ((eps_max - eps_min) * (step / eps_decay_steps)))
if np.random.random() < epsilon:
return np.random.randint(n_outputs)
return np.argmax(q_values)
Now we will train the model to play some Pong. The model will choose an action once every 3 frames. The preprocessing functions defined above will combine those 3 frames into the state the model will use to choose its next action.
In [111]:
n_steps = 10000000
training_start = 100000
training_interval = 4
save_steps = 1000
copy_steps = 10000
discount_rate = 0.95
skip_start = 20
batch_size = 50
iteration = 0
done = True # To reset the environment at the start.
loss_val = np.infty
game_length = 0
total_max_q = 0.0
mean_max_q = 0.0
checkpoint_path = "./pong_dqn.ckpt"
In [112]:
# Utility function to get the environment state for the model.
def perform_action(action):
preprocess_observations = []
total_reward = 0.0
for i in range(3):
obs, reward, done, info = env.step(action)
total_reward += reward
if done:
for _ in range(i, 3):
preprocess_observations.append(preprocess_observation(obs))
break
else:
preprocess_observations.append(preprocess_observation(obs))
return combine_observations(preprocess_observations).reshape(80, 80, 1), \
total_reward, done
In [113]:
# Main training loop
with tf.Session() as sess:
if os.path.isfile(checkpoint_path + '.index'):
saver.restore(sess, checkpoint_path)
else:
init.run()
copy_online_to_target.run()
while True:
step = global_step.eval()
if step >= n_steps:
break
iteration += 1
print('\rIteration {}\tTraining step {}/{} ({:.1f})%\tLoss {:5f}'
'\tMean Max-Q {:5f} '.format(
iteration, step, n_steps, 100 * step / n_steps, loss_val,
mean_max_q),
end='')
if done:
obs = env.reset()
for _ in range(skip_start):
obs, reward, done, info = env.step(0)
state, reward, done = perform_action(0)
# Evaluate the next action for the agent.
q_values = online_q_values.eval(
feed_dict={X_state: [state]})
action = epsilon_greedy(q_values, step)
# The online DQN plays the game.
next_state, reward, done = perform_action(action)
# Save the result in the ReplayMemory.
replay_memory.append((state, action, reward, next_state, 1.0 - done))
state = next_state
# Compute statistics which help us monitor how training is going.
total_max_q += q_values.max()
game_length += 1
if done:
mean_max_q = total_max_q / game_length
total_max_q = 0.0
game_length = 0
# Only train after the warmup rounds and only every few rounds.
if iteration < training_start or iteration % training_interval != 0:
continue
# Sample memories from the replay memory.
X_state_val, X_action_val, rewards, X_next_state_val, continues = \
sample_memories(batch_size)
next_q_values = target_q_values.eval(
feed_dict={X_state: X_next_state_val})
max_next_q_values = np.max(next_q_values, axis=1, keepdims=True)
y_val = rewards + continues * discount_rate * max_next_q_values
# Train the online DQN.
_, loss_val = sess.run([training_op, loss], feed_dict={
X_state: X_state_val,
X_action: X_action_val,
y: y_val,
})
# Regularly copy the online DQN to the target DQN.
if step % copy_steps == 0:
copy_online_to_target.run()
# Regularly save the model.
if step and step % save_steps == 0:
saver.save(sess, checkpoint_path)
In [115]:
preprocess_observations = []
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
def dqn_policy(obs, i):
if len(preprocess_observations) < 3:
preprocess_observations.append(preprocess_observation(obs))
if len(preprocess_observations) == 3:
state = combine_observations(preprocess_observations)
q_values = online_q_values.eval(
feed_dict={X_state: [state.reshape(80, 80, 1)]})
dqn_policy.cur_action = np.argmax(q_values)
return dqn_policy.cur_action
preprocess_observations[i % 3] = preprocess_observation(obs)
if i % 3 == 2:
state = combine_observations(preprocess_observations)
q_values = online_q_values.eval(
feed_dict={X_state: [state.reshape(80, 80, 1)]})
dqn_policy.cur_action = np.argmax(q_values)
return dqn_policy.cur_action
dqn_policy.cur_action = 0
html = run_episode(dqn_policy, n_max_steps=10000)
html
Special thanks to Dylan Cutler for contributing this solution!