Use TensorBoard to visualize your TensorFlow graph, plot quantitative metrics about the execution of the graph, and show additional data such as images that pass through it.


In [5]:
import tensorboard as tb
import tensorflow as tf

TensorBoard operates by reading TensorFlow event files, which contain summary data that you generate while running TensorFlow.
First create the TensorFlow graph and decide which nodes you would like to annotate.
To annotate a node with a scalar summary, use 'tf.summary.scalar'.
To visualize the distribution of activations coming from a particular layer, use 'tf.summary.histogram'.
To combine all your summary nodes into a single op that runs them, use 'tf.summary.merge_all'.
Finally, use a 'tf.summary.FileWriter' to write the serialized summary data to disk.
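Putting those steps together, a minimal end-to-end sketch looks like the cell below; the 'loss' variable and the '/tmp/summaries' log directory are just illustrative placeholders, not part of the model built later in this notebook.

In [ ]:
# Minimal sketch of the summary workflow (names and log directory are illustrative)
import tensorflow as tf

loss = tf.Variable(0.5, name='loss')            # some quantity we want to track
tf.summary.scalar('loss', loss)                 # annotate it with a scalar summary
tf.summary.histogram('loss_hist', loss)         # histograms are attached the same way

merged = tf.summary.merge_all()                 # collect every summary node into one op

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('/tmp/summaries', sess.graph)  # writes the event files
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)  # serialize the summaries to disk
    writer.close()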


In [6]:
def variable_summaries(var):
    # Attach several summaries to a Tensor for TensorBoard visualization
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar('mean', mean)
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)

In [3]:
def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    # Adding a name scope ensures logical grouping of layers in the graph
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            weights = weight_variable([input_dim, output_dim])
            variable_summaries(weights)
        with tf.name_scope('biases'):
            biases = bias_variable([output_dim])
            variable_summaries(biases)
        with tf.name_scope('Wx_plus_b'):
            preactivate = tf.matmul(input_tensor, weights) + biases
            tf.summary.histogram('pre_activations', preactivate)
        activations = act(preactivate, name='activation')
        tf.summary.histogram('activations', activations)
        return activations
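
nn_layer relies on weight_variable and bias_variable helpers that are not defined in this notebook. A common sketch of them is shown below; the initializer choices (truncated normal with stddev 0.1, constant 0.1 bias) are conventional assumptions, not requirements.

In [ ]:
def weight_variable(shape):
    # Initialize weights with a small amount of noise to break symmetry
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Initialize biases with a small positive constant
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)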

In [ ]:
hidden1 = nn_layer(x, 784, 500, 'layer1')   # x is the input placeholder defined earlier

with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.summary.scalar('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)

y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)

with tf.name_scope('cross_entropy'):
    # y_ is the labels placeholder; reduce the per-example losses to a single scalar
    diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
    cross_entropy = tf.reduce_mean(diff)
    tf.summary.scalar('cross_entropy', cross_entropy)
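
To record these summaries during training, merge them and hand the result to a FileWriter. The sketch below is illustrative: the Adam optimizer, the '/tmp/mnist_logs/train' log directory, the get_next_batch helper, and the placeholders x and y_ are assumptions about the surrounding notebook rather than things defined above.

In [ ]:
# Sketch: merge all summaries and write them out while training (assumed names throughout)
merged = tf.summary.merge_all()
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)  # assumed optimizer choice

with tf.Session() as sess:
    train_writer = tf.summary.FileWriter('/tmp/mnist_logs/train', sess.graph)
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        batch_xs, batch_ys = get_next_batch()  # hypothetical data-loading helper
        summary, _ = sess.run([merged, train_step],
                              feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.9})
        train_writer.add_summary(summary, step)  # tag each summary with the training step
    train_writer.close()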

In [ ]:
# To launch TensorBoard, run the following from a terminal (prefix with '!' inside a notebook):
tensorboard --logdir=path/to/log-directory
# logdir points to the directory where the FileWriter serialized its data;
# once running, TensorBoard serves at localhost:6006 by default

Graph Visualization

A TensorFlow graph can easily contain thousands of nodes, which are far too many to visualize all at once. To deal with this we use scoping: variable names are scoped, and the visualization uses this information to define a hierarchy on the nodes in the graph, showing only the top of that hierarchy by default.


In [8]:
with tf.name_scope('hidden') as scope:
    # These three ops collapse into a single node labelled 'hidden' in the graph view
    a = tf.constant(5, name='alpha')
    W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0), name='weights')
    b = tf.Variable(tf.zeros([1]), name='biases')

In [9]:
# Remember: the better your name scopes, the better the resulting visualization

TensorFlow graphs have two kinds of connections: data dependencies and control dependencies.
Data dependencies show the flow of tensors between two ops and are drawn as solid arrows.
Control dependencies are drawn as dotted lines.
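
As an illustration (the variable and op names here are made up), an explicit control dependency can be created with tf.control_dependencies; in the graph view the edge from 'increment' to 'doubled' is drawn as a dotted line:

In [ ]:
# Sketch: an explicit control dependency (shown as a dotted edge in the graph)
counter = tf.Variable(0, name='counter')
update_op = tf.assign_add(counter, 1, name='increment')

with tf.control_dependencies([update_op]):
    # This op gets a control-dependency edge on update_op: it cannot run before it
    doubled = tf.multiply(counter, 2, name='doubled')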

