Chapter 16 – Reinforcement Learning

This notebook contains all the sample code and solutions to the exercises in Chapter 16.

# Setup

First, let's make sure this notebook works well in both Python 2 and Python 3, import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures:

In [1]:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals

# Common imports
import numpy as np
import os
import sys

# to make this notebook's output stable across runs
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

# To plot pretty figures and animations
%matplotlib nbagg
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rl"

def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)

Note: there may be minor differences between the output of this notebook and the examples shown in the book. You can safely ignore these differences. They are mainly due to the fact that most of the environments provided by OpenAI gym have some randomness.

# Introduction to OpenAI gym

In this notebook we will be using OpenAI gym, a great toolkit for developing and comparing Reinforcement Learning algorithms. It provides many environments for your learning agents to interact with. Let's start by importing gym:

In [2]:
import gym

Next we will load the MsPacman environment, version 0.

In [3]:
env = gym.make('MsPacman-v0')

Let's initialize the environment by calling its reset() method. This returns an observation:

In [4]:
obs = env.reset()

Observations vary depending on the environment. In this case it is an RGB image represented as a 3D NumPy array of shape [height, width, channels] (with 3 channels: Red, Green and Blue). In other environments it may return different objects, as we will see later.

In [5]:
obs.shape

Out[5]:
(210, 160, 3)

An environment can be visualized by calling its render() method, and you can pick the rendering mode (the rendering options depend on the environment). In this example we will set mode="rgb_array" to get an image of the environment as a NumPy array:

In [6]:
img = env.render(mode="rgb_array")

Let's plot this image:

In [7]:
plt.figure(figsize=(5,4))
plt.imshow(img)
plt.axis("off")
save_fig("MsPacman")
plt.show()

Saving figure MsPacman

Welcome back to the 1980s! :)

In this environment, the rendered image is simply equal to the observation (but in many environments this is not the case):

In [8]:
(img == obs).all()

Out[8]:
True

Let's create a little helper function to plot an environment:

In [9]:
def plot_environment(env, figsize=(5,4)):
    plt.close()  # or else nbagg sometimes plots in the previous cell
    plt.figure(figsize=figsize)
    img = env.render(mode="rgb_array")
    plt.imshow(img)
    plt.axis("off")
    plt.show()

Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like:

In [10]:
env.action_space

Out[10]:
Discrete(9)

Discrete(9) means that the possible actions are integers 0 through 8, which represent the 9 possible positions of the joystick (0=center, 1=up, 2=right, 3=left, 4=down, 5=upper-right, 6=upper-left, 7=lower-right, 8=lower-left).
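
If you want to double-check this mapping, the Atari environments expose the action names via the underlying ALE environment. This is just a quick sanity check, not part of the book's code:

env.unwrapped.get_action_meanings()  # should list the 9 joystick positions by name ('NOOP', 'UP', 'RIGHT', ...)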

Next we need to tell the environment which action to play, and it will compute the next step of the game. Let's go left for 110 steps, then lower left for 40 steps:

In [11]:
env.reset()
for step in range(110):
    env.step(3)  # left
for step in range(40):
    env.step(8)  # lower-left

Where are we now?

In [12]:
plot_environment(env)

The step() function actually returns several important objects:

In [13]:
obs, reward, done, info = env.step(0)

The observation tells the agent what the environment looks like, as discussed earlier. This is a 210x160 RGB image:

In [14]:
obs.shape

Out[14]:
(210, 160, 3)

The environment also tells the agent how much reward it got during the last step:

In [15]:
reward

Out[15]:
0.0

When the game is over, the environment returns done=True:

In [16]:
done

Out[16]:
False

Finally, info is an environment-specific dictionary that can provide some extra information about the internal state of the environment. This is useful for debugging, but your agent should not use this information for learning (it would be cheating).

In [17]:
info

Out[17]:
{'ale.lives': 3}

Let's play one full game (with 3 lives), by moving in random directions for 10 steps at a time, recording each frame:

In [18]:
frames = []

n_max_steps = 1000
n_change_steps = 10

obs = env.reset()
for step in range(n_max_steps):
    img = env.render(mode="rgb_array")
    frames.append(img)
    if step % n_change_steps == 0:
        action = env.action_space.sample()  # play randomly
    obs, reward, done, info = env.step(action)
    if done:
        break

Now show the animation (it's a bit jittery within Jupyter):

In [19]:
def update_scene(num, frames, patch):
    patch.set_data(frames[num])
    return patch,

def plot_animation(frames, repeat=False, interval=40):
    plt.close()  # or else nbagg sometimes plots in the previous cell
    fig = plt.figure()
    patch = plt.imshow(frames[0])
    plt.axis('off')
    return animation.FuncAnimation(fig, update_scene, fargs=(frames, patch),
                                   frames=len(frames), repeat=repeat, interval=interval)

In [20]:
video = plot_animation(frames)
plt.show()
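
If the inline nbagg animation is too jittery, one alternative (not from the book, and assuming ffmpeg is installed) is to embed the frames as an HTML5 video instead, since matplotlib's FuncAnimation objects can encode themselves with to_html5_video():

from IPython.display import HTML
HTML(plot_animation(frames).to_html5_video())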

Once you have finished playing with an environment, you should close it to free up resources:

In [21]:
env.close()

To code our first learning agent, we will be using a simpler environment: the Cart-Pole.

# A simple environment: the Cart-Pole

The Cart-Pole is a very simple environment composed of a cart that can move left or right, and a pole placed vertically on top of it. The agent must move the cart left or right to keep the pole upright.

In [22]:
env = gym.make("CartPole-v0")

In [23]:
obs = env.reset()

In [24]:
obs

Out[24]:
array([ 0.0230852 ,  0.03446783, -0.04351584, -0.04777627])

The observation is a 1D NumPy array composed of 4 floats: they represent the cart's horizontal position, its velocity, the angle of the pole (0 = vertical), and the angular velocity. Let's render the environment... unfortunately we need to fix an annoying rendering issue first.
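
By the way, if you want to check the valid range of each of these four values, the environment's observation_space (a gym Box object) exposes its bounds. This is just a quick aside, not part of the book's code:

env.observation_space        # a Box with shape (4,)
env.observation_space.low    # lower bound of each of the 4 values
env.observation_space.high   # upper bound of each of the 4 values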

## Fixing the rendering issue

Some environments (including the Cart-Pole) require access to your display, which opens up a separate window, even if you specify the rgb_array mode. In general you can safely ignore that window. However, if Jupyter is running on a headless server (i.e., without a screen) it will raise an exception. One way to avoid this is to install a fake X server like Xvfb. You can start Jupyter using the xvfb-run command:

$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook

If Jupyter is running on a headless server but you don't want to worry about Xvfb, then you can just use the following rendering function for the Cart-Pole:

In [25]:
from PIL import Image, ImageDraw

try:
    from pyglet.gl import gl_info
    openai_cart_pole_rendering = True   # no problem, let's use OpenAI gym's rendering function
except Exception:
    openai_cart_pole_rendering = False  # probably no X server available, let's use our own rendering function

def render_cart_pole(env, obs):
    if openai_cart_pole_rendering:
        # use OpenAI gym's rendering function
        return env.render(mode="rgb_array")
    else:
        # rendering for the cart pole environment (in case OpenAI gym can't do it)
        img_w = 600
        img_h = 400
        cart_w = img_w // 12
        cart_h = img_h // 15
        pole_len = img_h // 3.5
        pole_w = img_w // 80 + 1
        x_width = 2
        max_ang = 0.2
        bg_col = (255, 255, 255)
        cart_col = 0x000000  # Blue Green Red
        pole_col = 0x669acc  # Blue Green Red

        pos, vel, ang, ang_vel = obs
        img = Image.new('RGB', (img_w, img_h), bg_col)
        draw = ImageDraw.Draw(img)
        cart_x = pos * img_w // x_width + img_w // x_width
        cart_y = img_h * 95 // 100
        top_pole_x = cart_x + pole_len * np.sin(ang)
        top_pole_y = cart_y - cart_h // 2 - pole_len * np.cos(ang)
        draw.line((0, cart_y, img_w, cart_y), fill=0)
        draw.rectangle((cart_x - cart_w // 2, cart_y - cart_h // 2, cart_x + cart_w // 2, cart_y + cart_h // 2), fill=cart_col)  # draw cart
        draw.line((cart_x, cart_y - cart_h // 2, top_pole_x, top_pole_y), fill=pole_col, width=pole_w)  # draw pole
        return np.array(img)

def plot_cart_pole(env, obs):
    plt.close()  # or else nbagg sometimes plots in the previous cell
    img = render_cart_pole(env, obs)
    plt.imshow(img)
    plt.axis("off")
    plt.show()

In [26]:
plot_cart_pole(env, obs)

Now let's look at the action space:

In [27]:
env.action_space

Out[27]:
Discrete(2)

Yep, just two possible actions: accelerate towards the left or towards the right. Let's push the cart left until the pole falls:

In [28]:
obs = env.reset()
while True:
    obs, reward, done, info = env.step(0)
    if done:
        break

In [29]:
plt.close()  # or else nbagg sometimes plots in the previous cell
img = render_cart_pole(env, obs)
plt.imshow(img)
plt.axis("off")
save_fig("cart_pole_plot")

Saving figure cart_pole_plot

In [30]:
img.shape

Out[30]:
(400, 600, 3)

Notice that the game is over when the pole tilts too much, not when it actually falls. Now let's reset the environment and push the cart to the right instead:

In [31]:
obs = env.reset()
while True:
    obs, reward, done, info = env.step(1)
    if done:
        break

In [32]:
plot_cart_pole(env, obs)

Looks like it's doing what we're telling it to do. Now how can we make the pole remain upright? We will need to define a policy for that: the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do.

# A simple hard-coded policy

Let's hard-code a simple strategy: if the pole is tilting to the left, then push the cart to the left, and vice versa. Let's see if that works:

In [33]:
frames = []

n_max_steps = 1000
n_change_steps = 10

obs = env.reset()
for step in range(n_max_steps):
    img = render_cart_pole(env, obs)
    frames.append(img)

    # hard-coded policy
    position, velocity, angle, angular_velocity = obs
    if angle < 0:
        action = 0
    else:
        action = 1

    obs, reward, done, info = env.step(action)
    if done:
        break

In [34]:
video = plot_animation(frames)
plt.show()

Nope, the system is unstable and after just a few wobbles, the pole ends up too tilted: game over.
We will need to be smarter than that!

# Neural Network Policies

Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will first estimate a probability for each action, then select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability p of action 0 (left), and of course the probability of action 1 (right) will be 1 - p.

Note: instead of using the fully_connected() function from the tensorflow.contrib.layers module (as in the book), we now use the dense() function from the tf.layers module, which did not exist when this chapter was written. This is preferable because anything in contrib may change or be deleted without notice, while tf.layers is part of the official API. As you will see, the code is mostly the same. The main differences relevant to this chapter are:

• the _fn suffix was removed from all the parameters that had it (for example, the activation_fn parameter was renamed to activation),
• the weights parameter was renamed to kernel,
• the default activation is None instead of tf.nn.relu.

In [35]:
import tensorflow as tf

# 1. Specify the network architecture
n_inputs = 4   # == env.observation_space.shape[0]
n_hidden = 4   # it's a simple task, we don't need more than this
n_outputs = 1  # only outputs the probability of accelerating left
initializer = tf.variance_scaling_initializer()

# 2. Build the neural network
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
outputs = tf.layers.dense(hidden, n_outputs, activation=tf.nn.sigmoid, kernel_initializer=initializer)

# 3. Select a random action based on the estimated probabilities
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)

init = tf.global_variables_initializer()

In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state, then you might need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is when the observations are noisy: you may want to use the past few observations to estimate the most likely current state. Our problem is thus as simple as can be: the current observation is noise-free and contains the environment's full state.

You may wonder why we are picking a random action based on the probability given by the policy network, rather than just picking the action with the highest probability. This approach lets the agent find the right balance between exploring new actions and exploiting the actions that are known to work well. Here's an analogy: suppose you go to a restaurant for the first time, and all the dishes look equally appealing, so you randomly pick one.
If it turns out to be good, you can increase the probability of ordering it next time, but you shouldn't increase that probability to 100%, or else you will never try out the other dishes, some of which may be even better than the one you tried.

Let's randomly initialize this policy neural network and use it to play one game:

In [36]:
n_max_steps = 1000
frames = []

with tf.Session() as sess:
    init.run()
    obs = env.reset()
    for step in range(n_max_steps):
        img = render_cart_pole(env, obs)
        frames.append(img)
        action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
        obs, reward, done, info = env.step(action_val[0][0])
        if done:
            break

env.close()

Now let's look at how well this randomly initialized policy network performed:

In [37]:
video = plot_animation(frames)
plt.show()

Yeah... pretty bad. The neural network will have to learn to do better. First let's see if it is capable of learning the basic policy we used earlier: go left if the pole is tilting left, and go right if it is tilting right.

The following code defines the same neural network, but we add the target probabilities y and the training operations (cross_entropy, optimizer and training_op):

In [38]:
import tensorflow as tf

reset_graph()

n_inputs = 4
n_hidden = 4
n_outputs = 1

learning_rate = 0.01

initializer = tf.variance_scaling_initializer()

X = tf.placeholder(tf.float32, shape=[None, n_inputs])
y = tf.placeholder(tf.float32, shape=[None, n_outputs])

hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits)  # probability of action 0 (left)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)

cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(cross_entropy)

init = tf.global_variables_initializer()
saver = tf.train.Saver()

We can make the same net play in 10 different environments in parallel, and train for 1000 iterations. We also reset environments when they are done.

In [39]:
n_environments = 10
n_iterations = 1000

envs = [gym.make("CartPole-v0") for _ in range(n_environments)]
observations = [env.reset() for env in envs]

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        # if angle < 0 we want proba(left) = 1., or else proba(left) = 0.
        target_probas = np.array([([1.] if obs[2] < 0 else [0.]) for obs in observations])
        action_val, _ = sess.run([action, training_op], feed_dict={X: np.array(observations), y: target_probas})
        for env_index, env in enumerate(envs):
            obs, reward, done, info = env.step(action_val[env_index][0])
            observations[env_index] = obs if not done else env.reset()
    saver.save(sess, "./my_policy_net_basic.ckpt")

for env in envs:
    env.close()

In [40]:
def render_policy_net(model_path, action, X, n_max_steps=1000):
    frames = []
    env = gym.make("CartPole-v0")
    obs = env.reset()
    with tf.Session() as sess:
        saver.restore(sess, model_path)
        for step in range(n_max_steps):
            img = render_cart_pole(env, obs)
            frames.append(img)
            action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
            obs, reward, done, info = env.step(action_val[0][0])
            if done:
                break
    env.close()
    return frames

In [41]:
frames = render_policy_net("./my_policy_net_basic.ckpt", action, X)
video = plot_animation(frames)
plt.show()

INFO:tensorflow:Restoring parameters from ./my_policy_net_basic.ckpt

Looks like it learned the policy correctly.
Now let's see if it can learn a better policy on its own.

# Policy Gradients

To train this neural network we will need to define the target probabilities y. If an action is good, we should increase its probability, and conversely, if it is bad, we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in a game, it is not clear which actions contributed to this result: was it just the last action? Or the last 10? Or just one action 50 steps earlier? This is called the credit assignment problem.

The Policy Gradients algorithm tackles this problem by first playing multiple games, then making the actions in good games slightly more likely, while actions in bad games are made slightly less likely. First we play, then we go back and think about what we did.

In [42]:
import tensorflow as tf

reset_graph()

n_inputs = 4
n_hidden = 4
n_outputs = 1

learning_rate = 0.01

initializer = tf.variance_scaling_initializer()

X = tf.placeholder(tf.float32, shape=[None, n_inputs])

hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits)  # probability of action 0 (left)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)

y = 1. - tf.to_float(action)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
    gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
    gradient_placeholders.append(gradient_placeholder)
    grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)

init = tf.global_variables_initializer()
saver = tf.train.Saver()

In [43]:
def discount_rewards(rewards, discount_rate):
    discounted_rewards = np.zeros(len(rewards))
    cumulative_rewards = 0
    for step in reversed(range(len(rewards))):
        cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
        discounted_rewards[step] = cumulative_rewards
    return discounted_rewards

def discount_and_normalize_rewards(all_rewards, discount_rate):
    all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards]
    flat_rewards = np.concatenate(all_discounted_rewards)
    reward_mean = flat_rewards.mean()
    reward_std = flat_rewards.std()
    return [(discounted_rewards - reward_mean)/reward_std for discounted_rewards in all_discounted_rewards]

In [44]:
discount_rewards([10, 0, -50], discount_rate=0.8)

Out[44]:
array([-22., -40., -50.])

For example, the first discounted reward is 10 + 0.8 × 0 + 0.8² × (−50) = −22.

In [45]:
discount_and_normalize_rewards([[10, 0, -50], [10, 20]], discount_rate=0.8)

Out[45]:
[array([-0.28435071, -0.86597718, -1.18910299]),
 array([ 1.26665318,  1.0727777 ])]

In [46]:
env = gym.make("CartPole-v0")

n_games_per_update = 10
n_max_steps = 1000
n_iterations = 250
save_iterations = 10
discount_rate = 0.95

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        print("\rIteration: {}".format(iteration), end="")
        all_rewards = []
        all_gradients = []
        for game in range(n_games_per_update):
            current_rewards = []
            current_gradients = []
            obs = env.reset()
            for step in range(n_max_steps):
                action_val, gradients_val = sess.run([action, gradients], feed_dict={X: obs.reshape(1, n_inputs)})
                obs, reward, done, info = env.step(action_val[0][0])
                current_rewards.append(reward)
                current_gradients.append(gradients_val)
                if done:
                    break
            all_rewards.append(current_rewards)
            all_gradients.append(current_gradients)

        all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate=discount_rate)
        feed_dict = {}
        for var_index, gradient_placeholder in enumerate(gradient_placeholders):
            mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index]
                                      for game_index, rewards in enumerate(all_rewards)
                                      for step, reward in enumerate(rewards)], axis=0)
            feed_dict[gradient_placeholder] = mean_gradients
        sess.run(training_op, feed_dict=feed_dict)
        if iteration % save_iterations == 0:
            saver.save(sess, "./my_policy_net_pg.ckpt")

Iteration: 249

In [47]:
env.close()

In [48]:
frames = render_policy_net("./my_policy_net_pg.ckpt", action, X, n_max_steps=1000)
video = plot_animation(frames)
plt.show()

INFO:tensorflow:Restoring parameters from ./my_policy_net_pg.ckpt

# Markov Chains

A Markov chain is a stochastic process with no memory: the probability of moving to the next state depends only on the current state, according to fixed transition probabilities.

In [49]:
transition_probabilities = [
    [0.7, 0.2, 0.0, 0.1],  # from s0 to s0, s1, s2, s3
    [0.0, 0.0, 0.9, 0.1],  # from s1 to ...
    [0.0, 1.0, 0.0, 0.0],  # from s2 to ...
    [0.0, 0.0, 0.0, 1.0],  # from s3 to ...
]

n_max_steps = 50

def print_sequence(start_state=0):
    current_state = start_state
    print("States:", end=" ")
    for step in range(n_max_steps):
        print(current_state, end=" ")
        if current_state == 3:
            break
        current_state = np.random.choice(range(4), p=transition_probabilities[current_state])
    else:
        print("...", end="")
    print()

for _ in range(10):
    print_sequence()

States: 0 0 3
States: 0 1 2 1 2 1 2 1 2 1 3
States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3
States: 0 3
States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3
States: 0 1 3
States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 ...
States: 0 0 3
States: 0 0 0 1 2 1 2 1 3
States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3

# Markov Decision Process

A Markov decision process adds actions and rewards to a Markov chain: at each step the agent picks an action, and both the transition probabilities and the rewards depend on the current state and the chosen action.

In [50]:
transition_probabilities = [
    [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],  # in s0, if action a0 then proba 0.7 to state s0 and 0.3 to state s1, etc.
    [[0.0, 1.0, 0.0], None, [0.0, 0.0, 1.0]],
    [None, [0.8, 0.1, 0.1], None],
]

rewards = [
    [[+10, 0, 0], [0, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 0], [0, 0, -50]],
    [[0, 0, 0], [+40, 0, 0], [0, 0, 0]],
]

possible_actions = [[0, 1, 2], [0, 2], [1]]

def policy_fire(state):
    return [0, 2, 1][state]

def policy_random(state):
    return np.random.choice(possible_actions[state])

def policy_safe(state):
    return [0, 0, 1][state]

class MDPEnvironment(object):
    def __init__(self, start_state=0):
        self.start_state = start_state
        self.reset()
    def reset(self):
        self.total_rewards = 0
        self.state = self.start_state
    def step(self, action):
        next_state = np.random.choice(range(3), p=transition_probabilities[self.state][action])
        reward = rewards[self.state][action][next_state]
        self.state = next_state
        self.total_rewards += reward
        return self.state, reward

def run_episode(policy, n_steps, start_state=0, display=True):
    env = MDPEnvironment()
    if display:
        print("States (+rewards):", end=" ")
    for step in range(n_steps):
        if display:
            if step == 10:
                print("...", end=" ")
            elif step < 10:
                print(env.state, end=" ")
        action = policy(env.state)
        state, reward = env.step(action)
        if display and step < 10:
            if reward:
                print("({})".format(reward), end=" ")
    if display:
        print("Total rewards =", env.total_rewards)
    return env.total_rewards

for policy in (policy_fire, policy_random, policy_safe):
    all_totals = []
    print(policy.__name__)
    for episode in range(1000):
        all_totals.append(run_episode(policy, n_steps=100, display=(episode < 5)))
    print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals)))
    print()

policy_fire
States (+rewards): 0 (10) 0 (10) 0 1 (-50) 2 2 2 (40) 0 (10) 0 (10) 0 (10) ... Total rewards = 210
States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 1 (-50) 2 2 (40) 0 (10) ... Total rewards = 70
States (+rewards): 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) ... Total rewards = 70
States (+rewards): 0 1 (-50) 2 1 (-50) 2 (40) 0 (10) 0 1 (-50) 2 (40) 0 ... Total rewards = -10
States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 (10) ... Total rewards = 290
Summary: mean=121.1, std=129.333766, min=-330, max=470

policy_random
States (+rewards): 0 1 (-50) 2 1 (-50) 2 (40) 0 1 (-50) 2 2 (40) 0 ... Total rewards = -60
States (+rewards): 0 (10) 0 0 0 0 0 (10) 0 0 0 (10) 0 ... Total rewards = -30
States (+rewards): 0 1 1 (-50) 2 (40) 0 0 1 1 1 1 ... Total rewards = 10
States (+rewards): 0 (10) 0 (10) 0 0 0 0 1 (-50) 2 (40) 0 0 ... Total rewards = 0
States (+rewards): 0 0 (10) 0 1 (-50) 2 (40) 0 0 0 0 (10) 0 (10) ... Total rewards = 40
Summary: mean=-22.1, std=88.152740, min=-380, max=200

policy_safe
States (+rewards): 0 1 1 1 1 1 1 1 1 1 ... Total rewards = 0
States (+rewards): 0 1 1 1 1 1 1 1 1 1 ... Total rewards = 0
States (+rewards): 0 (10) 0 (10) 0 (10) 0 1 1 1 1 1 1 ... Total rewards = 30
States (+rewards): 0 (10) 0 1 1 1 1 1 1 1 1 ... Total rewards = 10
States (+rewards): 0 1 1 1 1 1 1 1 1 1 ... Total rewards = 0
Summary: mean=22.3, std=26.244312, min=0, max=170

# Q-Learning

Q-Learning works by watching an agent play (e.g., randomly) and gradually improving its estimates of the Q-Values. Once it has accurate Q-Value estimates (or close enough), the optimal policy consists in choosing the action that has the highest Q-Value (i.e., the greedy policy).
In [51]:
n_states = 3
n_actions = 3
n_steps = 20000
alpha = 0.01
gamma = 0.99
exploration_policy = policy_random
q_values = np.full((n_states, n_actions), -np.inf)
for state, actions in enumerate(possible_actions):
    q_values[state][actions] = 0

env = MDPEnvironment()
for step in range(n_steps):
    action = exploration_policy(env.state)
    state = env.state
    next_state, reward = env.step(action)
    next_value = np.max(q_values[next_state])  # greedy policy
    q_values[state, action] = (1-alpha)*q_values[state, action] + alpha*(reward + gamma * next_value)

In [52]:
def optimal_policy(state):
    return np.argmax(q_values[state])

In [53]:
q_values

Out[53]:
array([[ 39.13508139,  38.88079412,  35.23025716],
       [ 18.9117071 ,         -inf,  20.54567816],
       [        -inf,  72.53192111,         -inf]])

In [54]:
all_totals = []
for episode in range(1000):
    all_totals.append(run_episode(optimal_policy, n_steps=100, display=(episode < 5)))
print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals)))
print()

States (+rewards): 0 (10) 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 1 (-50) 2 (40) 0 (10) ... Total rewards = 230
States (+rewards): 0 (10) 0 (10) 0 (10) 0 1 (-50) 2 2 1 (-50) 2 (40) 0 (10) ... Total rewards = 90
States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) ... Total rewards = 170
States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) ... Total rewards = 220
States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) ... Total rewards = -50
Summary: mean=125.6, std=127.363464, min=-290, max=500

# Learning to Play MsPacman Using the DQN Algorithm

Warning: Unfortunately, the first version of the book contained two important errors in this section.

1. The actor DQN and critic DQN should have been named online DQN and target DQN respectively. Actor-critic algorithms are a distinct class of algorithms.
2. The online DQN is the one that learns and is copied to the target DQN at regular intervals. The target DQN's only role is to estimate the next state's Q-Values for each possible action. This is needed to compute the target Q-Values for training the online DQN, as shown in this equation:

$y(s, a) = r + \gamma \cdot \underset{a'}{\max} \, Q_\text{target}(s', a')$

• $y(s, a)$ is the target Q-Value to train the online DQN for the state-action pair $(s, a)$.
• $r$ is the reward actually collected after playing action $a$ in state $s$.
• $\gamma$ is the discount rate.
• $s'$ is the state actually reached after playing action $a$ in state $s$.
• $a'$ is one of the possible actions in state $s'$.
• $Q_\text{target}(s', a')$ is the target DQN's estimate of the Q-Value of playing action $a'$ while in state $s'$.

I hope these errors did not affect you, and if they did, I sincerely apologize.
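
To make the equation above concrete, here is a minimal NumPy sketch of how the target Q-Values could be computed for a small batch of sampled transitions. The variable names and values are illustrative (this is not the notebook's DQN training code), and the extra continues factor simply zeroes out the future term when the episode ended at s':

import numpy as np

# Hypothetical mini-batch of 3 transitions (illustrative values only)
rewards       = np.array([[1.0], [0.0], [1.0]])                  # r
continues     = np.array([[1.0], [0.0], [1.0]])                  # 0.0 if the episode ended at s'
next_q_values = np.array([[0.5, 2.0], [1.0, 0.3], [0.0, 0.1]])   # Q_target(s', a') for every action a'
discount_rate = 0.99                                             # gamma

max_next_q = np.max(next_q_values, axis=1, keepdims=True)        # max over a'
y = rewards + continues * discount_rate * max_next_q             # target Q-Values y(s, a)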

## Creating the MsPacman environment

In [55]:
env = gym.make("MsPacman-v0")
obs = env.reset()
obs.shape

Out[55]:
(210, 160, 3)

In [56]:
env.action_space

Out[56]:
Discrete(9)

## Preprocessing

Preprocessing the images is optional but greatly speeds up training.

In [57]:
mspacman_color = 210 + 164 + 74

def preprocess_observation(obs):
    img = obs[1:176:2, ::2]  # crop and downsize
    img = img.sum(axis=2)  # to greyscale
    img[img == mspacman_color] = 0  # improve contrast
    img = (img // 3 - 128).astype(np.int8)  # normalize from -128 to 127
    return img.reshape(88, 80, 1)

img = preprocess_observation(obs)

Note: the preprocess_observation() function is slightly different from the one in the book: instead of representing pixels as 64-bit floats from -1.0 to 1.0, it represents them as signed bytes (from -128 to 127). The benefit is that the replay memory will take up roughly 8 times less RAM (about 6.5 GB instead of 52 GB). The reduced precision has no visible impact on training.
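
As a rough sanity check of that claim (assuming a replay memory of about one million preprocessed frames, which is an assumption for illustration, not a figure taken from this notebook):

frame_bytes_int8    = 88 * 80 * 1   # one preprocessed frame stored as signed bytes
frame_bytes_float64 = 88 * 80 * 8   # the same frame stored as 64-bit floats
n_frames = 1000000                  # hypothetical replay memory size

print(frame_bytes_int8 * n_frames / 2**30)     # ~6.6 GiB
print(frame_bytes_float64 * n_frames / 2**30)  # ~52 GiB, i.e. roughly 8x more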

In [58]:
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title("Original observation (160×210 RGB)")
plt.imshow(obs)
plt.axis("off")
plt.subplot(122)
plt.title("Preprocessed observation (88×80 greyscale)")
plt.imshow(img.reshape(88, 80), interpolation="nearest", cmap="gray")
plt.axis("off")
save_fig("preprocessing_plot")
plt.show()