**Outline**
**Clues in TensorFlow**
**What does TensorFlow actually do?**
TensorFlow provides primitives for defining functions on tensors and automatically computing their derivatives
**What is a Tensor?**
Formally, tensors are multilinear maps from vector spaces to the real numbers (with $V$ a vector space and $V^*$ its dual space):
$$ f: V^* \times \dots \times V^* \times V \times \dots \times V \rightarrow \mathbb{R} $$
A scalar is a tensor $ \quad f: \mathbb{R} \rightarrow \mathbb{R},\quad f(e_1) = c $
A vector is a tensor $ \quad f: \mathbb{R}^n \rightarrow \mathbb{R},\quad f(e_i) = v_i $
A matrix is a tensor $ \quad f: \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R},\quad f(e_i, e_j) = A_{ij} $
It is common to fix a basis, so a tensor can be represented as a multidimensional array of numbers.
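With a fixed basis, the multidimensional-array view is exactly what NumPy provides; a minimal illustration (NumPy assumed available):

```python
import numpy as np

# A scalar is a rank-0 tensor, a vector is rank-1, a matrix is rank-2.
scalar = np.array(5.0)         # shape ()
vector = np.array([1.0, 2.0])  # shape (2,)
matrix = np.eye(2)             # shape (2, 2)

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
```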
**TensorFlow Computation Graph**
“TensorFlow programs are usually structured into a construction phase, that assembles a graph, and an execution phase that uses a session to execute ops in the graph.” TensorFlow docs
All computations add nodes to the global default graph (TensorFlow docs).
Importing TensorFlow works the same as for other Python libraries:
In [ ]:
import tensorflow as tf
**Phase I: Construction Phase**
TensorFlow defines a computation graph that has no numerical value until evaluated!
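The deferred-evaluation idea can be sketched without TensorFlow. This toy graph (hypothetical classes, not the TF API) builds nodes first and computes nothing until `run()` is called:

```python
class Node:
    """Toy graph node: stores an operation, computes nothing until run()."""
    def __init__(self, op, inputs=()):
        self.op = op
        self.inputs = inputs

    def run(self):
        # Recursively evaluate inputs, then apply this node's op.
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

c = add(constant(5.0), constant(6.0))  # no numerical value yet
print(c.run())  # 11.0
```

The same separation holds in TensorFlow: the construction phase builds `Node`-like objects, and the execution phase (a `Session`) triggers the actual evaluation.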
Here are some clues you may find useful in this phase:
In [ ]:
constant1 = tf.constant(5.0)
constant2 = tf.zeros((2,2)); constant3 = tf.ones((2,2))
Variables
“When you train a model you use variables to hold and update parameters. Variables are in-memory buffers containing tensors” TensorFlow docs
In [ ]:
w1 = tf.Variable(tf.zeros((2,2)), name="weights")
w2 = tf.Variable(tf.random_normal((2,2)), name="random_weights")
Updating a Variable
In [ ]:
new_w2 = tf.add(w2, w1)
update_w1 = tf.assign(w1, new_w2)  # op that writes new_w2 into w1 when run
Inputting external data into TensorFlow
In [ ]:
import numpy as np
a = np.zeros((3,3))
ta = tf.convert_to_tensor(a)
tf.convert_to_tensor() is convenient, but doesn’t scale.
tf.placeholder variables do scale: they are dummy nodes that provide entry points for data into the computation graph.
In [ ]:
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)
Some useful functions
In [ ]:
a.get_shape()
tf.reshape(a, (1,4))
tf.reduce_sum(a, axis=1)
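These shape operations have direct NumPy analogues, which are handy for checking what result to expect (a rough correspondence, NumPy assumed):

```python
import numpy as np

a = np.zeros((2, 2))
print(a.shape)                # (2, 2)  -- analogous to a.get_shape()
print(a.reshape(1, 4).shape)  # (1, 4)  -- analogous to tf.reshape(a, (1, 4))
print(np.sum(a, axis=1))      # row sums -- analogous to tf.reduce_sum along axis 1
```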
**Phase II: Execution Phase**
“A Session object encapsulates the environment in which Tensor objects are evaluated” TensorFlow Docs
In [ ]:
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.Session() as sess:
    print(sess.run(c))
Two convenient syntactic shortcuts for evaluating tensors
In [ ]:
tf.InteractiveSession()
Just convenient syntactic sugar for keeping a default session open in IPython.
In [ ]:
object.eval()
Just syntactic sugar for sess.run(object) in the currently active session!
In [ ]:
tf.InteractiveSession()
c.eval()
Feeding input in the Execution Phase
A feed_dict is a Python dictionary mapping from tf.placeholder vars (or their names) to data (NumPy arrays, lists, etc.).
In [ ]:
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)
with tf.Session() as sess:
    print(sess.run([output], feed_dict={input1: [7.], input2: [2.]}))
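The feed_dict mechanism can be mimicked in plain Python to see what it does. In this toy sketch (hypothetical helpers, not the TF API), placeholders are empty entry points and values are supplied only at run time:

```python
class Placeholder:
    """Toy entry point: holds no value; one is supplied via feed_dict at run time."""
    pass

def run(node, feed_dict):
    # A placeholder is looked up in feed_dict; an (op, inputs) pair is evaluated recursively.
    if isinstance(node, Placeholder):
        return feed_dict[node]
    op, inputs = node
    return op(*(run(n, feed_dict) for n in inputs))

input1 = Placeholder()
input2 = Placeholder()
output = (lambda x, y: x * y, (input1, input2))  # deferred multiply

print(run(output, {input1: 7.0, input2: 2.0}))  # 14.0
```

TensorFlow's `sess.run(output, feed_dict=...)` plays the role of `run` here: the graph stays fixed while different inputs are fed through the same placeholders.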