02 TensorFlow Basics

This section covers the basics of the TensorFlow playground. These illustrations are inspired by the lecture notes of CS 20SI: TensorFlow for Deep Learning Research. I do not own this code. For more info visit http://web.stanford.edu/class/cs20si/index.html


In [1]:
#import
import tensorflow as tf

sess = tf.Session()

Most of the operations are the same as in NumPy.


In [2]:
zer = tf.zeros([2,3], tf.int32)
print(sess.run(zer))
print("shape", zer.shape)


[[0 0 0]
 [0 0 0]]
shape (2, 3)

In [3]:
# we can get tensors of zeros / ones shaped like another tensor
input_tensor = [[0], [1], [2], [3]]
zer_like = tf.zeros_like(input_tensor, dtype=None, name=None, optimize=True)
print("zero_like", sess.run(zer_like))
# same goes for  tf.ones_like()


zero_like [[0]
 [0]
 [0]
 [0]]

In [4]:
# we can create a tensor filled with a user-given value using tf.fill(dims, value, name=None)
print(sess.run(tf.fill([3,4], 8.0)))


[[ 8.  8.  8.  8.]
 [ 8.  8.  8.  8.]
 [ 8.  8.  8.  8.]]

In [5]:
# You can create sequences of evenly spaced values beginning at start and ending at stop
# tf.linspace(start, stop, num, name=None)
linsp = tf.linspace(10.0, 13.0, 4, name='linsp')
print("linspace    ", sess.run(linsp))

# tf.range(start, limit=None, delta=1, dtype=None, name='range')
# the limit is not included
print("range     ", sess.run(tf.range(3, 18, 3)))
print("-----------")
print("NOTE: important thing to note here is tensorflow sequences are not iterable(like in numpy)")
#for _ in np.linspace(0,10,4) # works fine
#for _ in tf.linspace(0,10,4) # TypeError("'Tensor' object is not iterable.")

#for _ in range(4) # OK
#for _ in tf.range(4) # TypeError("'Tensor' object is not iterable.")


linspace     [ 10.  11.  12.  13.]
range      [ 3  6  9 12 15]
-----------
NOTE: unlike NumPy sequences, TensorFlow sequences are not iterable

Most of the time, you can use TensorFlow types and NumPy types interchangeably.

  • Note 1: There is a catch here for string data types. For numeric and boolean types, TensorFlow and NumPy dtypes match down the line. However, tf.string does not have an exact match in NumPy, due to the way NumPy handles strings. TensorFlow can still import string arrays from NumPy perfectly fine -- just don't specify a dtype in NumPy! (See the sketch after these notes.)

  • Note 2: Both TensorFlow and NumPy are n-d array libraries. NumPy supports ndarrays, but it doesn't offer methods to create tensor functions, automatically compute derivatives, or run on GPUs. So TensorFlow still wins!

  • Note 3: Using Python types to specify TensorFlow objects is quick and easy, and useful for prototyping ideas. However, there is an important pitfall: Python types lack the ability to explicitly state the data type, while TensorFlow's data types are more specific. For example, all Python integers are the same type, but TensorFlow has 8-bit, 16-bit, 32-bit, and 64-bit integers available.

  • Therefore, if you use a Python type, TensorFlow has to infer which data type you mean. It's possible to convert the data into the appropriate type when you pass it into TensorFlow, but certain data types may still be difficult to declare correctly, such as complex numbers. Because of this, it is common to create hand-defined Tensor objects as NumPy arrays.

  • So, use TensorFlow types whenever possible, because TensorFlow and NumPy may evolve to a point where such compatibility no longer exists.
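
A minimal sketch of mixing NumPy and TensorFlow types (the names here are just for illustration):

import numpy as np
import tensorflow as tf

# numeric dtypes match: tf.float32 and np.float32 are interchangeable
print(tf.float32 == np.float32)             # True

with tf.Session() as sess:
    # TensorFlow accepts NumPy arrays directly
    arr = np.ones([2, 2], dtype=np.float32)
    print(sess.run(tf.multiply(arr, 2.0)))  # [[ 2.  2.] [ 2.  2.]]

    # strings: let NumPy infer the dtype, don't specify one
    s = np.array([b'hello', b'world'])
    print(sess.run(tf.constant(s)))         # [b'hello' b'world']

# with a plain Python int, TensorFlow has to infer the dtype (int32 here);
# prefer stating the TensorFlow dtype explicitly
c = tf.constant(1, dtype=tf.int64)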

More on constants and variables

To declare a variable, you create an instance of the class tf.Variable. Note that it's tf.constant but tf.Variable (not tf.variable), because tf.constant is an op, while tf.Variable is a class.
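
A quick way to see the difference (a small sketch; the exact printed class paths vary by version):

c = tf.constant(3)   # an op that produces a Tensor
v = tf.Variable(3)   # an instance of the tf.Variable class
print(type(c))       # a Tensor class
print(type(v))       # a Variable class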

The main difference between a constant and a variable:

  • A constant is constant: its value cannot be changed. A variable can be assigned to; its value can be changed.
  • A constant's value is stored in the graph and its value is replicated wherever the graph is loaded. A variable is stored separately, and may live on a parameter server.

Point 2 basically means that constants are stored in the graph definition. When constants are memory-expensive, loading the graph will be slow every time. To see the graph's definition and what's stored in it, simply print out the graph's protobuf. Protobuf stands for protocol buffer, “Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler.”
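
For example, a minimal sketch (the constant's name is just for illustration):

my_const = tf.constant([1.0, 2.0], name='my_const')
print(tf.get_default_graph().as_graph_def())
# the printed protobuf contains a node named "my_const" whose tensor_content
# field embeds the values 1.0 and 2.0 directly in the graph definition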


In [6]:
# We can create a tf.Variable w as a 784 x 10 tensor filled with zeros

w = tf.Variable(tf.zeros([784,10])) 
print("shape of w", w.shape)


shape of w (784, 10)

tf.Variable holds several ops:

x = tf.Variable(...)

x.initializer      # init op
x.value()          # read op
x.assign(...)      # write op
x.assign_add(...)  # ...and more
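
A small runnable sketch of these ops (the variable here is just for illustration):

x = tf.Variable(tf.ones([2]))
with tf.Session() as sess:
    sess.run(x.initializer)               # init op
    print(sess.run(x.value()))            # read op  --> [ 1.  1.]
    print(sess.run(x.assign([3., 4.])))   # write op --> [ 3.  4.]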


In [7]:
# we need to initialize the variables before using them

init = tf.global_variables_initializer()
with tf.Session() as sess:
   sess.run(init)


# Note that you use sess.run() to run the initializer, not fetching any value.

In [8]:
# To initialize only a subset of variables, you use tf.variables_initializer() with a list of variables you want to initialize:
a = tf.Variable(1, dtype=tf.int32)
b = tf.Variable(0.5, dtype=tf.float32)
c = tf.Variable(10.3, dtype=tf.float64)

init = tf.variables_initializer([a,b], name="init")

with tf.Session() as sess:
    sess.run(init)
    print("a ",sess.run(a))
    print("b ",sess.run(b))
    # print("c ",sess.run(c)) # throws error "Attempting to use uninitialized value Variable"

# You can also initialize each variable separately using  tf.Variable.initializer 

W = tf.Variable(tf.zeros([784, 10])) # create variable W as a 784 x 10 tensor, filled with zeros
with tf.Session() as sess:
    sess.run(W.initializer)
    print("shape of W  ", W.shape)


a  1
b  0.5
shape of W   (784, 10)

In [9]:
# to evaluate a variable we use .eval() method
x = tf.Variable(tf.truncated_normal([4,2]))

with tf.Session() as sess:
    sess.run(x.initializer)
    print("not evaluating correctly ", x) # wont print the value of x, rather it will print Tensor("Variable/read:0",shape =(700,10),dtype=float32)
    print("evaluating correctly ",x.eval()) # will print the value of variable x


not evaluating correctly  <tf.Variable 'Variable_5:0' shape=(4, 2) dtype=float32_ref>
evaluating correctly  [[-0.23452228  0.11243899]
 [-0.66670126 -1.20581996]
 [ 1.74655235 -0.82684875]
 [-0.04403993 -1.0582459 ]]

In [10]:
# assign values to variables-->the incorrect way:

w = tf.Variable(10)
w.assign(100)

with tf.Session() as sess:
    sess.run(w.initializer)
    print("should print new value 100, but printed old value ",w.eval()) #prints 10 not 100 why!!!!! 
    
    # w.assign(100) doesn't assign the value 100 to w, but instead it creates an assign op to do that. 
    # we have to run this assign op in session

    # assign values to variables-->the correct way:

    w = tf.Variable(10)
    assign_op = w.assign(100)
    sess.run(assign_op)
    #don't need a sess.run(w.initializer) here; assign() will do that for us.
    #in fact, the initializer op is just an assign op that assigns the variable's initial value.
    print("new value ", w.eval()) # prints 100


should print new value 100, but printed old value  10
new value  100

In [11]:
# multiple sessions maintain separate values 
w = tf.Variable(10)

sess1 = tf.Session()
sess2 = tf.Session()

sess1.run(w.initializer)
sess2.run(w.initializer)

print(sess1.run(w.assign_add(10))) # prints 20
print(sess2.run(w.assign_sub(2))) # prints 8

sess1.close()
sess2.close()


20
8

In [12]:
# declare a variable that depends on another variable
w = tf.Variable(tf.truncated_normal([700,10]))
u = tf.Variable(w*2)

# safer: initialize u from w's initialized value, which guarantees
# that w is initialized before its value is used
u = tf.Variable(w.initialized_value() * 2)

In [13]:
# tf.InteractiveSession()
# an interactive session installs itself as the default session, so we can call eval()/run() without explicitly passing a session
sess_int = tf.InteractiveSession()
a = tf.constant(5)
b = tf.constant(6)
c = a * b

print(c.eval()) # we can use this without passing sess

sess_int.close() #close the interactive session

# tf.InteractiveSession().close() closes the interactive session.
# tf.get_default_session() returns the default session for the current thread.
# The returned session will be the innermost session on which a Session or Session.as_default() context has been entered.


30
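
Relatedly, a minimal sketch of Session.as_default() (names are just for illustration):

sess = tf.Session()
with sess.as_default():
    print(tf.get_default_session() is sess)  # True
    print(tf.constant(7).eval())              # eval() uses the default session
sess.close()  # as_default() does not close the session for us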

Handling dependencies within a graph: suppose our graph g has five ops a, b, c, d, e. We can control their order of execution with g.control_dependencies, as shown below:

with g.control_dependencies([a, b, c]):
    # d and e will only run after a, b, and c have executed
    d = ...
    e = ...
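
A runnable sketch of this on the default graph (the names are just for illustration):

a = tf.constant(1.0)
b = tf.Variable(0.0)
assign_b = b.assign(5.0)

with tf.get_default_graph().control_dependencies([assign_b]):
    # this read of b will only run after assign_b has executed
    c = b.read_value() + a

with tf.Session() as sess:
    sess.run(b.initializer)
    print(sess.run(c))  # 6.0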

To print the graph definition:

print(tf.get_default_graph().as_graph_def())

A sample example with variables:


In [1]:
#import
import tensorflow as tf

a = tf.Variable(2, name='scalar')

a_times_two = a.assign(a*2)

init = tf.global_variables_initializer()

with tf.Session() as ses:
    ses.run(init)
    print(ses.run(a_times_two)) # prints 4
    print(ses.run(a_times_two)) # prints 8
    print(ses.run(a_times_two)) # prints 16
    print(ses.run(a_times_two - a.assign_sub(32))) # prints 0 here, but the result depends on the order in which the two assign ops run, which is not guaranteed


4
8
16
0