Low Level TensorFlow, Part I: Basics


In [2]:
# import and check version
import tensorflow as tf
# tf can be really verbose
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)


1.13.0-rc0

In [3]:
# a small sanity check: does tf seem to work ok?
hello = tf.constant('Hello TF!')
sess = tf.Session()
print(sess.run(hello))
sess.close()


b'Hello TF!'

First, define a computational graph composed of operations and tensors

  • the main object you manipulate and pass around is the tf.Tensor
  • a tf.Tensor object represents a partially defined computation that will eventually produce a value
  • a tf.Tensor is a generalization of vectors and matrices to potentially higher dimensions
  • TensorFlow represents a tf.Tensor internally as an n-dimensional array of base datatypes
  • TensorFlow programs work by first building a graph of tf.Tensor objects, detailing how each tensor is computed based on the other available tensors, and then running parts of this graph to achieve the desired results

https://www.tensorflow.org/guide/tensors
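
A tensor's static shape and dtype can be inspected without running the graph; a minimal sketch (using the same TF 1.x import as above — the `shape` and `dtype` attributes also exist in later versions):

```python
import tensorflow as tf

scalar = tf.constant(3.0)                  # rank 0
vector = tf.constant([1.0, 2.0, 3.0])      # rank 1
matrix = tf.constant([[1, 2], [3, 4]])     # rank 2

# static shape and dtype are known at graph-construction time,
# before any session runs the graph
print(scalar.shape)   # ()
print(vector.shape)   # (3,)
print(matrix.shape)   # (2, 2)
print(matrix.dtype)   # <dtype: 'int32'>
```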


In [4]:
a = tf.constant(3.0, dtype=tf.float32) # special type of tensor
b = tf.constant(4.0) # also tf.float32 implicitly
total = a + b
print(a)
print(b)
print(total)


Tensor("Const_1:0", shape=(), dtype=float32)
Tensor("Const_2:0", shape=(), dtype=float32)
Tensor("add:0", shape=(), dtype=float32)

In [0]:
# types need to match
try:
  tf.constant(3.0, dtype=tf.float32) + tf.constant(4, dtype=tf.int32)
except TypeError as te:
  print(te)


Input 'y' of 'Add' Op has type int32 that does not match type float32 of argument 'x'.

In [0]:
# https://www.tensorflow.org/api_docs/python/tf/dtypes/cast
a = tf.constant(3, dtype=tf.int32)
b = tf.cast(tf.constant(4.0, dtype=tf.float32), tf.int32)
int_total = a + b
int_total


Out[0]:
<tf.Tensor 'add_2:0' shape=() dtype=int32>

Then use a session to execute the graph


In [5]:
# sessions need to be closed in order not to leak resources; the with block makes sure close() is called in any case
with tf.Session() as sess:
  print(sess.run(total))
#   print(sess.run(int_total))


7.0

Graphs can be executed on CPU, GPU, and even TPU


In [6]:
# let's see what compute devices we have available, hopefully a GPU
# if you do not see one, enable it under Runtime -> Change runtime type
with tf.Session() as sess:
  devices = sess.list_devices()
  for d in devices:
      print(d.name)


/job:localhost/replica:0/task:0/device:CPU:0
/job:localhost/replica:0/task:0/device:XLA_CPU:0
/job:localhost/replica:0/task:0/device:XLA_GPU:0
/job:localhost/replica:0/task:0/device:GPU:0

In [7]:
tf.test.gpu_device_name()


Out[7]:
'/device:GPU:0'

In [8]:
# GPU support requires NVIDIA CUDA
tf.test.is_built_with_cuda()


Out[8]:
True

In [9]:
with tf.device("/device:XLA_CPU:0"):
  with tf.Session() as sess:
    print(sess.run(total))


7.0

Feeding data to a graph


In [0]:
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = x + y

In [0]:
with tf.Session() as sess:
  try:
    print(sess.run(z))
  except tf.errors.InvalidArgumentError as iae:
     print(iae.message)


You must feed a value for placeholder tensor 'Placeholder' with dtype float
	 [[node Placeholder (defined at <ipython-input-11-3b59dde2b9f0>:1)  = Placeholder[dtype=DT_FLOAT, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
	 [[{{node add_3/_1}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7_add_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

In [0]:
with tf.Session() as sess:
    print(sess.run(z, feed_dict={x: 3.0, y: 4.5}))


7.5