Introduction to TensorFlow

Based on the official TensorFlow Tutorial

Kaivalya Rawal and Rohan James

Agenda

  • Synopsis
  • Installation and Import
  • Computational Graph Nodes
  • Training
  • High Level API

Synopsis

  • Google library for 'tensor' operations
  • Applications beyond ML
  • Most basic unit: matrix-like Tensors
  • Transformations on defined n-rank tensors are arranged in a graph: a two-step process of first building the computational graph, then running it
  • TensorBoard visualizations

Installation and Import

  • Install steps on the TensorFlow website, or a simple pip install in a virtual environment
  • TensorBoard comes built in
  • Launch TensorBoard by pointing it at a log directory, as sketched below
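For example (assuming pip inside an activated virtual environment; /tmp/tf_logs is a hypothetical log directory):

pip install tensorflow
tensorboard --logdir=/tmp/tf_logs   # then browse to http://localhost:6006

To give TensorBoard a graph to display, write it out from Python with tf.summary.FileWriter('/tmp/tf_logs', sess.graph).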

In [1]:
import tensorflow as tf

In [2]:
3 # a rank 0 tensor; this is a scalar with shape []
[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]


Out[2]:
[[[1.0, 2.0, 3.0]], [[7.0, 8.0, 9.0]]]
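(A notebook cell echoes only its final expression, hence Out[2] shows just the rank-3 example. These literals are plain Python lists; they become tensors only when handed to TensorFlow operations.)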

Nodes

  • Make up the computational graph
  • Each node takes in zero or more tensors as input, and gives one tensor as output
  • Simplest node: the constant. Zero tensors in, a single constant tensor out.
  • Addition (an operation node)
  • Placeholder (a promise to provide a value at run time)
  • Variable (holds trainable parameters)

In [3]:
n1 = tf.constant(2.0, tf.float32)
n2 = tf.constant(4.0) # type? inferred as float32 (see output)

print(n1)
print(n2)


Tensor("Const:0", shape=(), dtype=float32)
Tensor("Const_1:0", shape=(), dtype=float32)

This is just the build step. To evaluate the nodes, run the graph within a session.


In [4]:
sess = tf.Session()
print(sess.run([n1, n2]))


[2.0, 4.0]

Addition node


In [5]:
n3 = tf.add(n1, n2)
print('Node 3', n3)
print('sess.run(n3)', sess.run(n3))


Node 3 Tensor("Add:0", shape=(), dtype=float32)
sess.run(n3) 6.0

In [6]:
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))


7.5
[ 3.  7.]

Other operations


In [7]:
add_and_half = adder_node / 2
print(sess.run(add_and_half, {a:0.5, b:-1.5}))


-0.5
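The same session/placeholder pattern composes with any other operation. A small sketch (variable names here are ours; tf.multiply and tf.square are standard TF 1.x ops):

tripled = tf.multiply(adder_node, 3.0)   # (a + b) * 3
squared = tf.square(add_and_half)        # ((a + b) / 2) ** 2
print(sess.run([tripled, squared], {a: 1.0, b: 2.0}))   # -> [9.0, 2.25]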

Tunable variables:


In [8]:
# pass dtype as a keyword: tf.Variable's second positional argument is trainable, not dtype
m = tf.Variable([.3], dtype=tf.float32)
c = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = m * x + c

Unlike constants, whose values never change, variables are not initialized when they are created; they must be initialized explicitly.


In [9]:
init = tf.global_variables_initializer()
sess.run(init)
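(A single variable can also be initialized on its own with sess.run(m.initializer); tf.global_variables_initializer() simply bundles the initializers of every variable in the graph.)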

Evaluating the line's ordinate (the predicted y value) for several x values simultaneously:


In [10]:
print(sess.run(linear_model, {x:[1,2,3,0,-5,20]}))


[ 0.          0.30000001  0.60000002 -0.30000001 -1.79999995  5.69999981]
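To evaluate the model on training data, we need a y placeholder for the target values and a loss function. The standard choice for linear regression is the sum of squared deltas between predictions and targets: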

In [11]:
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))


23.66
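Sanity check: with m = 0.3 and c = -0.3, the predictions for x = [1, 2, 3, 4] are [0.0, 0.3, 0.6, 0.9], so the squared deltas against y are [0.0, 1.69, 6.76, 15.21], which sum to 23.66. The variables can be reassigned by hand with tf.assign to reduce the loss: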

In [12]:
n_slope = tf.assign(m, [-0.9])
n_const = tf.assign(c, [1.5])
sess.run([n_slope, n_const])
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))


2.3
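Better, but still not perfect: the ideal values m = -1 and c = 1 give zero loss. The point of machine learning is to find such parameters automatically, which is what training does.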

Training

  • Gradient Descent optimization


In [21]:
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
  sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]})

print(sess.run([m, c]))


[array([-0.9999969], dtype=float32), array([ 0.9999908], dtype=float32)]

Experimenting with different learning-rate (step size) values:
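A minimal sketch for comparing step sizes on the graph above (the rates chosen here are arbitrary):

for lr in (0.001, 0.01, 0.1):
    train_lr = tf.train.GradientDescentOptimizer(lr).minimize(loss)
    sess.run(init)  # reset m and c before each run
    for _ in range(1000):
        sess.run(train_lr, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
    print(lr, sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))

Too large a rate can overshoot and diverge; too small a rate converges needlessly slowly.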


High Level API

  • (still figuring it out)
  • Linear Regression Example

In [23]:
import numpy as np

In [24]:
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]

In [25]:
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)


INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_summary_steps': 100, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1.0
}
, '_save_checkpoints_steps': None, '_environment': 'local', '_task_id': 0, '_keep_checkpoint_every_n_hours': 10000, '_num_worker_replicas': 0, '_tf_random_seed': None, '_keep_checkpoint_max': 5, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f402027c7b8>, '_save_checkpoints_secs': 600, '_evaluation_master': '', '_model_dir': None, '_num_ps_replicas': 0, '_master': '', '_is_chief': True, '_task_type': None}
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpxia88ibd

In [26]:
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4, num_epochs=1000)
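The estimator must be trained before it can be evaluated; the cell that does so is not shown above, but in the official tutorial it is a single call:

estimator.fit(input_fn=input_fn, steps=1000)

(The 'model.ckpt-1000' restore in the log below is the checkpoint this call writes.)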

In [29]:
score = estimator.evaluate(input_fn=input_fn)


WARNING:tensorflow:Rank of input Tensor (1) should be the same as output_rank (2) for column. Will attempt to expand dims. It is highly recommended that you resize your input, as this behavior may change.
WARNING:tensorflow:From /home/kaivalya/Documents/Work-Ongoing/SenSei/venv/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py:615: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
INFO:tensorflow:Starting evaluation at 2017-05-30-09:20:45
INFO:tensorflow:Restoring parameters from /tmp/tmpxia88ibd/model.ckpt-1000
INFO:tensorflow:Finished evaluation at 2017-05-30-09:20:45
INFO:tensorflow:Saving dict for global step 1000: global_step = 1000, loss = 3.82239e-08
WARNING:tensorflow:Skipping summary for global_step, must be a float or np.float32.

In [30]:
print(score)


{'loss': 3.8223888e-08, 'global_step': 1000}

