In [0]:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
In the previous tutorial, you covered the TensorFlow APIs for automatic differentiation—a basic building block for machine learning. In this tutorial, you will use the TensorFlow primitives introduced in the prior tutorials to do some simple machine learning.
TensorFlow also includes tf.keras, a high-level neural network API that provides useful abstractions to reduce boilerplate and make TensorFlow easier to use without sacrificing flexibility or performance. We strongly recommend the tf.keras API for development. However, in this short tutorial you will learn how to train a neural network from first principles to establish a strong foundation.
In [0]:
import tensorflow as tf
Tensors in TensorFlow are immutable stateless objects. Machine learning models, however, must have changing state: as your model trains, the same code to compute predictions should behave differently over time (hopefully with a lower loss!). To represent this state, which needs to change over the course of your computation, you can choose to rely on the fact that Python is a stateful programming language:
In [0]:
# Using Python state
x = tf.zeros([10, 10])
x += 2  # This is equivalent to x = x + 2, which does not mutate the
        # original value of x
print(x)
TensorFlow has stateful operations built in, and these are often easier to use than low-level Python representations of your state. Use tf.Variable to represent weights in a model.
A tf.Variable object stores a value and implicitly reads from this stored value. There are operations (tf.Variable.assign_sub, tf.Variable.scatter_update, etc.) that manipulate the value stored in a TensorFlow variable.
In [0]:
v = tf.Variable(1.0)
# Use Python's `assert` as a debugging statement to test the condition
assert v.numpy() == 1.0
# Reassign the value of `v`
v.assign(3.0)
assert v.numpy() == 3.0
# Use `v` in a TensorFlow `tf.square()` operation and reassign
v.assign(tf.square(v))
assert v.numpy() == 9.0
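The mutating operations mentioned above work in the same in-place fashion. Here is a minimal sketch (the specific values are illustrative, not from the original tutorial):
In [0]:
v = tf.Variable([1.0, 2.0, 3.0])
# In-place element-wise subtraction
v.assign_sub([0.5, 0.5, 0.5])
assert list(v.numpy()) == [0.5, 1.5, 2.5]
# Sparse update: overwrite only the element at index 1
v.scatter_update(tf.IndexedSlices(tf.constant([9.0]), tf.constant([1])))
assert list(v.numpy()) == [0.5, 9.0, 2.5]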
Computations using tf.Variable are automatically traced when computing gradients. For variables that represent embeddings, TensorFlow will do sparse updates by default, which are more computation- and memory-efficient.
A tf.Variable is also a way to show a reader of your code that a piece of state is mutable.
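For example, here is a minimal sketch (the variable and values are illustrative) showing that a tf.GradientTape watches trainable variables automatically, with no explicit watch call:
In [0]:
w = tf.Variable(2.0)
with tf.GradientTape() as tape:
  y = w * w  # the tape records this computation because `w` is a variable
grad = tape.gradient(y, w)
assert grad.numpy() == 4.0  # dy/dw = 2 * w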
Let's use the concepts you have learned so far (Tensor, Variable, and GradientTape) to build and train a simple model. This typically involves a few steps:
1. Define the model.
2. Define a loss function.
3. Obtain training data.
4. Run through the training data, using an optimizer to adjust the variables to fit the data.
Here, you'll create a simple linear model, f(x) = x * W + b, which has two variables: W (weights) and b (bias). You'll synthesize data such that a well-trained model would have W = 3.0 and b = 2.0.
In [0]:
class Model(object):
  def __init__(self):
    # Initialize the weights to `5.0` and the bias to `0.0`
    # In practice, these should be initialized to random values
    # (for example, with `tf.random.normal`)
    self.W = tf.Variable(5.0)
    self.b = tf.Variable(0.0)

  def __call__(self, x):
    return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
In [0]:
# The standard L2 loss: the mean of the squared differences between
# the targets and the predictions
def loss(target_y, predicted_y):
  return tf.reduce_mean(tf.square(target_y - predicted_y))
In [0]:
# Synthesize training data: points on the line y = x * TRUE_W + TRUE_b,
# plus Gaussian noise
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000

inputs = tf.random.normal(shape=[NUM_EXAMPLES])
noise = tf.random.normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
Before training the model, visualize the loss value by plotting the model's predictions in red and the training data in blue:
In [0]:
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: %1.6f' % loss(outputs, model(inputs)).numpy())
With the network and training data, train the model using gradient descent to update the weights variable (W) and the bias variable (b) and reduce the loss. There are many variants of the gradient descent scheme captured in tf.keras.optimizers, our recommended implementation. But in the spirit of building from first principles, here you will implement the basic math yourself with the help of tf.GradientTape for automatic differentiation and tf.Variable.assign_sub for decrementing a value (which combines assignment and subtraction):
In [0]:
def train(model, inputs, outputs, learning_rate):
  with tf.GradientTape() as t:
    current_loss = loss(outputs, model(inputs))
  # Differentiate the loss with respect to W and b, then take one
  # gradient-descent step on each variable
  dW, db = t.gradient(current_loss, [model.W, model.b])
  model.W.assign_sub(learning_rate * dW)
  model.b.assign_sub(learning_rate * db)
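For comparison, here is a sketch of the same step using the recommended built-in optimizer; tf.keras.optimizers.SGD applies the same W -= learning_rate * dW rule implemented by hand above (the name train_with_optimizer is ours, for illustration only):
In [0]:
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

def train_with_optimizer(model, inputs, outputs):
  with tf.GradientTape() as t:
    current_loss = loss(outputs, model(inputs))
  grads = t.gradient(current_loss, [model.W, model.b])
  # The optimizer pairs each gradient with its variable and applies the update
  optimizer.apply_gradients(zip(grads, [model.W, model.b]))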
Finally, let's repeatedly run through the training data and see how W and b evolve.
In [0]:
model = Model()

# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
  Ws.append(model.W.numpy())
  bs.append(model.b.numpy())
  current_loss = loss(outputs, model(inputs))

  train(model, inputs, outputs, learning_rate=0.1)
  print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
        (epoch, Ws[-1], bs[-1], current_loss))

# Let's plot it all
plt.plot(epochs, Ws, 'r',
         epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
         [TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'True W', 'True b'])
plt.show()
This tutorial used tf.Variable to build and train a simple linear model.
In practice, high-level APIs such as tf.keras are much more convenient for building neural networks. tf.keras provides higher-level building blocks (called "layers"), utilities to save and restore state, a suite of loss functions, a suite of optimization strategies, and more. Read the TensorFlow Keras guide to learn more.
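As a rough sketch of what that looks like, the hand-rolled model above could be expressed with a single Dense layer, whose kernel and bias play the roles of W and b (the hyperparameter choices here are illustrative, not from the original tutorial):
In [0]:
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
keras_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
                    loss='mean_squared_error')
keras_model.fit(tf.reshape(inputs, [-1, 1]),
                tf.reshape(outputs, [-1, 1]),
                epochs=10, verbose=0)
# The learned kernel and bias should approach TRUE_W and TRUE_b
print(keras_model.layers[0].get_weights())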