In [1]:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
TensorBoard’s Graphs dashboard is a powerful tool for examining your TensorFlow model. You can quickly view a conceptual graph of your model’s structure and ensure it matches your intended design. You can also view an op-level graph to understand how TensorFlow understands your program. Examining the op-level graph can give you insight as to how to change your model. For example, you can redesign your model if training is progressing slower than expected.
This tutorial presents a quick overview of how to generate graph diagnostic data and visualize it in TensorBoard’s Graphs dashboard. You’ll define and train a simple Keras Sequential model for the Fashion-MNIST dataset and learn how to log and examine your model graphs. You will also use a tracing API to generate graph data for functions created using the new tf.function annotation.
In [2]:
# Load the TensorBoard notebook extension.
%load_ext tensorboard
In [3]:
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
    "This notebook requires TensorFlow 2.0 or above."
In [4]:
import tensorboard
tensorboard.__version__
Out[4]:
In [5]:
# Clear any logs from previous runs
!rm -rf ./logs/
In [6]:
# Define the model.
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
Download and prepare the training data.
In [7]:
(train_images, train_labels), _ = keras.datasets.fashion_mnist.load_data()
train_images = train_images / 255.0
Before training, define the Keras TensorBoard callback, specifying the log directory. By passing this callback to Model.fit(), you ensure that graph data is logged for visualization in TensorBoard.
In [8]:
# Define the Keras TensorBoard callback.
logdir = "logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

# Train the model.
model.fit(
    train_images,
    train_labels,
    batch_size=64,
    epochs=5,
    callbacks=[tensorboard_callback])
Out[8]:
In [ ]:
%tensorboard --logdir logs
By default, TensorBoard displays the op-level graph. (On the left, you can see the “Default” tag selected.) Note that the graph is inverted; data flows from bottom to top, so it’s upside down compared to the code. However, you can see that the graph closely matches the Keras model definition, with extra edges to other computation nodes.
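If you want to inspect the op-level graph programmatically as well, one option is to trace the model into a concrete function and list the operations TensorFlow recorded. This is a minimal sketch reusing the model defined above; the forward wrapper and the tf.TensorSpec input signature are illustrative choices, not part of the original tutorial.
In [ ]:
# A minimal sketch: trace the model into a concrete function and
# count the ops TensorFlow records in its op-level graph.
@tf.function
def forward(x):
    return model(x)

concrete = forward.get_concrete_function(
    tf.TensorSpec(shape=[None, 28, 28], dtype=tf.float32))
print(len(concrete.graph.get_operations()), "ops in the traced graph")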
Graphs are often very large, so you can manipulate the graph visualization:
- Scroll to zoom in and out
- Drag to pan
- Double-click to toggle node expansion (a node can be a container for other nodes)
You can also see metadata by clicking on a node. This allows you to see inputs, outputs, shapes and other details.
In addition to the execution graph, TensorBoard also displays a conceptual graph. This is a view of just the Keras model. This may be useful if you’re reusing a saved model and you want to examine or validate its structure.
To see the conceptual graph, select the “keras” tag. For this example, you’ll see a collapsed Sequential node. Double-click the node to see the model’s structure:
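If you are reloading a saved model, a quick way to examine or validate its structure before inspecting the conceptual graph is to reload it and print a layer summary. This is a minimal sketch; the "my_model" path is an illustrative assumption.
In [ ]:
# A minimal sketch: save the model above, reload it, and inspect
# its layer structure. The "my_model" path is illustrative.
model.save("my_model")
reloaded = keras.models.load_model("my_model")
reloaded.summary()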
The examples so far have described graphs of Keras models, where the graphs have been created by defining Keras layers and calling Model.fit().
You may encounter a situation where you need to use the tf.function annotation to "autograph", i.e., transform, a Python computation function into a high-performance TensorFlow graph. For these situations, you use the TensorFlow Summary Trace API to log autographed functions for visualization in TensorBoard.
To use the Summary Trace API:
- Define and annotate a function with tf.function
- Use tf.summary.trace_on() immediately before your function call site
- Add profile information (memory, CPU time) by passing profiler=True
- With a Summary file writer, call tf.summary.trace_export() to save the log data

You can then use TensorBoard to see how your function behaves.
In [10]:
# The function to be traced.
@tf.function
def my_func(x, y):
    # A simple hand-rolled layer.
    return tf.nn.relu(tf.matmul(x, y))

# Set up logging.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = 'logs/func/%s' % stamp
writer = tf.summary.create_file_writer(logdir)

# Sample data for your function.
x = tf.random.uniform((3, 3))
y = tf.random.uniform((3, 3))

# Bracket the function call with
# tf.summary.trace_on() and tf.summary.trace_export().
tf.summary.trace_on(graph=True, profiler=True)
# Call only one tf.function when tracing.
z = my_func(x, y)
with writer.as_default():
    tf.summary.trace_export(
        name="my_func_trace",
        step=0,
        profiler_outdir=logdir)
In [ ]:
%tensorboard --logdir logs/func
You can now see the structure of your function as understood by TensorBoard. Click on the "Profile" radio button to see CPU and memory statistics.
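Note that recent TensorFlow releases deprecate the profiler argument to tf.summary.trace_on() in favor of the standalone profiler API. A minimal sketch of that alternative, assuming TF 2.2 or later (where tf.profiler.experimental is available) and reusing my_func, x, y, and logdir from the cell above:
In [ ]:
# A minimal sketch (assumes TF >= 2.2): collect profile data with the
# standalone profiler API instead of trace_on(profiler=True).
tf.profiler.experimental.start(logdir)
z = my_func(x, y)
tf.profiler.experimental.stop()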