This notebook demonstrates using reinforcement learning to train an agent to play Pong.

The first step is to create an `Environment` that implements this task. Fortunately, OpenAI Gym already provides an implementation of Pong (and many other tasks appropriate for reinforcement learning). DeepChem's `GymEnvironment` class provides an easy way to use environments from OpenAI Gym. We could just use it directly, but in this case we subclass it and preprocess the screen image a little bit to make learning easier.

This tutorial and the rest in this sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.

To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. To install `gym`, you should also use `pip install 'gym[atari]'` (we need the extra modifier since we'll be using an Atari game). We'll add this command onto our usual Colab installation commands for you.

In [1]:
```python
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
```

In [2]:
```python
!pip install 'gym[atari]'
```

In [0]:
```python
import deepchem as dc
import numpy as np

class PongEnv(dc.rl.GymEnvironment):

  def __init__(self):
    super(PongEnv, self).__init__('Pong-v0')
    self._state_shape = (80, 80)

  @property
  def state(self):
    # Crop everything outside the play area, reduce the image size,
    # and convert it to black and white.
    cropped = np.array(self._state)[34:194, :, :]
    reduced = cropped[0:-1:2, 0:-1:2]
    grayscale = np.sum(reduced, axis=2)
    bw = np.zeros(grayscale.shape)
    bw[grayscale != 233] = 1
    return bw

  def __deepcopy__(self, memo):
    return PongEnv()

env = PongEnv()
```
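To make the preprocessing concrete, here is a minimal standalone sketch of the same crop → downsample → binarize pipeline, run on a synthetic frame rather than a real Pong screen. The frame dimensions (210×160×3) match real `Pong-v0` observations, and 233 is the channel sum of the usual Pong background color (144, 72, 17); the "ball" pixel and its position are purely illustrative.

```python
import numpy as np

# Synthetic 210x160 RGB "screen": fill with the Pong background color
# (144, 72, 17), then place one bright pixel standing in for the ball.
frame = np.zeros((210, 160, 3), dtype=np.uint8)
frame[:, :] = [144, 72, 17]
frame[100, 80] = [236, 236, 236]  # hypothetical "ball" pixel

# Same steps as PongEnv.state:
cropped = frame[34:194, :, :]        # keep the 160-row play area
reduced = cropped[0:-1:2, 0:-1:2]    # take every other row and column
grayscale = np.sum(reduced, axis=2)  # collapse the color channels
bw = np.zeros(grayscale.shape)
bw[grayscale != 233] = 1             # background sums to exactly 233 -> 0

print(bw.shape)  # (80, 80)
print(bw.sum())  # 1.0 -- only the "ball" pixel survives
```

The result is an 80×80 array of zeros with a single 1 where the ball was, which is a much easier input for the network than raw RGB frames.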

Next we create a network to implement the policy. We begin with two convolutional layers to process the image. That is followed by a dense (fully connected) layer to provide plenty of capacity for game logic. We also add a small Gated Recurrent Unit. That gives the network a little bit of memory, so it can keep track of which way the ball is moving.

We concatenate the dense and GRU outputs together, and use them as inputs to two final layers that serve as the network's outputs. One computes the action probabilities, and the other computes an estimate of the state value function.

We also provide an input for the initial state of the GRU, and return its final state at the end. This is required by the learning algorithm.
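As a sanity check on the layer sizes, we can work out the shape at each stage. These numbers follow from the standard "valid" convolution output formula, not from anything DeepChem-specific:

```python
def conv_out(size, kernel, stride):
    # Output length along one dimension of a 'valid' convolution.
    return (size - kernel) // stride + 1

h = conv_out(80, 8, 4)   # first conv: 16 filters, 8x8 kernel, stride 4
print(h)                 # 19 -> conv1 output is 19x19x16
h = conv_out(h, 4, 2)    # second conv: 32 filters, 4x4 kernel, stride 2
print(h)                 # 8 -> conv2 output is 8x8x32
print(h * h * 32)        # 2048 features feed the 256-unit dense layer
```

So the 80×80 screen is compressed to 2048 features before the dense layer, and the 256-unit dense output (plus the 16-unit GRU output) feeds the two output heads.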

In [0]:
```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Concatenate, Conv2D, Dense, Flatten, GRU, Reshape

class PongPolicy(dc.rl.Policy):

  def __init__(self):
    super(PongPolicy, self).__init__(['action_prob', 'value', 'rnn_state'], [np.zeros(16)])

  def create_model(self, **kwargs):
    state = Input(shape=(80, 80))
    rnn_state = Input(shape=(16,))
    conv1 = Conv2D(16, kernel_size=8, strides=4, activation=tf.nn.relu)(Reshape((80, 80, 1))(state))
    conv2 = Conv2D(32, kernel_size=4, strides=2, activation=tf.nn.relu)(conv1)
    dense = Dense(256, activation=tf.nn.relu)(Flatten()(conv2))
    gru, rnn_final_state = GRU(16, return_state=True, return_sequences=True)(
        Reshape((-1, 256))(dense), initial_state=rnn_state)
    concat = Concatenate()([dense, Reshape((16,))(gru)])
    action_prob = Dense(env.n_actions, activation=tf.nn.softmax)(concat)
    value = Dense(1)(concat)
    return tf.keras.Model(inputs=[state, rnn_state], outputs=[action_prob, value, rnn_final_state])

policy = PongPolicy()
```

We optimize the policy with DeepChem's A3C (Asynchronous Advantage Actor-Critic) implementation.

In [5]:
```python
from deepchem.models.optimizers import Adam
a3c = dc.rl.A3C(env, policy, model_dir='model', optimizer=Adam(learning_rate=0.0002))
```

In [6]:
```python
# Change this to train as many steps as you have patience for.
a3c.fit(1000)
```
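Under the hood, A3C trains the policy head on advantages and the value head on discounted returns. As a general illustration of those quantities (not DeepChem's exact implementation; the rewards and critic estimates below are made up):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    # Work backwards through an episode, accumulating discounted reward.
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return np.array(out[::-1])

rewards = [0.0, 0.0, 1.0]            # toy episode: reward only at the end
values = np.array([0.5, 0.6, 0.8])   # hypothetical critic estimates
returns = discounted_returns(rewards)
advantages = returns - values        # positive -> action did better than expected

print(returns)     # 0.9801, 0.99, 1.0
print(advantages)  # 0.4801, 0.39, 0.2
```

A positive advantage increases the probability of the action taken, a negative one decreases it; the value head's job is to make those baselines accurate.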

Let's watch it play and see how it does!

In [0]:
```python
# This code doesn't work well on Colab
env.reset()
while not env.terminated:
  env.env.render()
  env.step(a3c.select_action(env.state))
```

Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:

Star DeepChem on GitHub. This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.

The DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!