Making predictions

Load Model

This notebook loads a model trained earlier in the TensorFlow Basics workshop, in 2_keras.ipynb or 3_eager.ipynb.

Note: The code in this notebook is quite Colab-specific and won't work in plain Jupyter.
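
If you do want to share code between Colab and a local Jupyter setup, one common pattern (a sketch, not part of the original workshop code) is to detect the Colab runtime and branch on it:

```python
import importlib.util

# True when the google.colab package is importable, i.e. we are running in Colab.
IN_COLAB = importlib.util.find_spec('google.colab') is not None
print('Running in Colab:', IN_COLAB)
```

The Drive-mount and `gsutil` cells below could then be guarded with `if IN_COLAB:`.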


In [1]:
# In Jupyter, you would need to install TF 2 via !pip.
%tensorflow_version 2.x


TensorFlow 2.x selected.

In [0]:
## Load models from Drive (Colab only).
models_path = '/content/gdrive/My Drive/amld_data/models'
data_path = '/content/gdrive/My Drive/amld_data/zoo_img'

## Or load models from local machine.
# models_path = './amld_models'
# data_path = './amld_data'
## Or load models from GCS (Colab only).
# models_path = 'gs://amld-datasets/models'
# data_path = 'gs://amld-datasets/zoo_img_small'

In [3]:
if models_path.startswith('/content/gdrive/'):
  from google.colab import drive
  drive.mount('/content/gdrive')

if models_path.startswith('gs://'):
  # Keras doesn't read directly from GCS -> download.
  from google.colab import auth
  import os
  os.makedirs('./amld_models', exist_ok=True)
  auth.authenticate_user()
  !gsutil cp -r "$models_path"/\* ./amld_models
  models_path = './amld_models'

!ls -lh "$models_path"


Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly

Enter your authorization code:
··········
Mounted at /content/gdrive
ls: cannot access '/content/gdrive/My Drive/amld_data/models': No such file or directory

In [4]:
import json, os
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf

# Tested with TensorFlow 2.1.0
print('version={}, CUDA={}, GPU={}, TPU={}'.format(
    tf.__version__, tf.test.is_built_with_cuda(),
    # GPU attached? Note that you can "Runtime/Change runtime type..." in Colab.
    len(tf.config.list_physical_devices('GPU')) > 0,
    # TPU accessible? (only works on Colab)
    'COLAB_TPU_ADDR' in os.environ))


version=2.1.0, CUDA=True, GPU=True, TPU=False

In [5]:
# Load the label names from the dataset.
labels = [label.strip() for label in 
          tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]
print('\n'.join(['%2d: %s' % (i, label) for i, label in enumerate(labels)]))


 0: camel
 1: crocodile
 2: dolphin
 3: elephant
 4: flamingo
 5: giraffe
 6: kangaroo
 7: lion
 8: monkey
 9: penguin
10: rhinoceros

In [6]:
# Load model from 2_keras.ipynb
model = tf.keras.models.load_model(os.path.join(models_path, 'linear.h5'))
model.summary()


Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 4096)              0         
_________________________________________________________________
dense (Dense)                (None, 11)                45067     
=================================================================
Total params: 45,067
Trainable params: 45,067
Non-trainable params: 0
_________________________________________________________________
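
The parameter count in the summary can be reproduced by hand: a Dense layer holds one weight per input-output pair plus one bias per output (a quick sanity check, independent of the loaded model object):

```python
# Flatten turns a 64x64 image into 4096 inputs; Dense maps them to 11 labels.
inputs = 64 * 64
outputs = 11
params = inputs * outputs + outputs  # weight matrix + bias vector
print(params)  # → 45067, matching the summary
```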

Live Predictions


In [0]:
from google.colab import output
import IPython

def predict(img_64):
  """Gets predictions for the provided image.

  Args:
    img_64: Raw 64x64 image data as a flat sequence of ints.

  Returns:
    A JSON object whose `result` value is a text representation of the
    top predictions.
  """
  # Reshape image into batch with single image (extra dimension "1").
  preds = model.predict(np.array(img_64, float).reshape([1, 64, 64]))
  # Get top three predictions (reverse argsort).
  top3 = (-preds[0]).argsort()[:3]
  # Return both probability and prediction label name.
  result = '\n'.join(['%.3f: %s' % (preds[0, i], labels[i]) for i in top3])
  return IPython.display.JSON(dict(result=result))

output.register_callback('amld.predict', predict)
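
The reverse-argsort trick used in `predict` can be checked on a toy probability vector with plain NumPy, no model needed:

```python
import numpy as np

# Fake "predictions" for a batch of one image over four classes.
preds = np.array([[0.05, 0.7, 0.1, 0.15]])
# Negating before argsort sorts descending: highest probability first.
top3 = (-preds[0]).argsort()[:3]
print(top3)  # → [1 3 2]
```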

In [8]:
%%html
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<canvas width="256" height="256" id="canvas" style="border:1px solid black"></canvas><br />
<button id="clear">clear</button><br />
<pre id="output"></pre>
<script>
  let upscaleFactor = 4, halfPenSize = 2
  let canvas = document.getElementById('canvas')
  let output = document.getElementById('output')
  let ctx = canvas.getContext('2d')
  let img_64 = new Uint8Array(64*64)
  let dragging = false
  let timeout
  let predict = () => {
    google.colab.kernel.invokeFunction('amld.predict', [Array.from(img_64)], {}).then(
        obj => output.textContent = obj.data['application/json'].result)
  }
  const getPos = e => {
    let x = e.offsetX, y = e.offsetY
    if (e.touches) {
      const rect = canvas.getBoundingClientRect()
      x = e.touches[0].clientX - rect.left
      y = e.touches[0].clientY - rect.top
    }
    return {
      x: Math.floor((x - halfPenSize*upscaleFactor)/upscaleFactor),
      y: Math.floor((y - halfPenSize*upscaleFactor)/upscaleFactor),
    }
  }
  const handler = e => {
    const { x, y } = getPos(e)
    ctx.fillStyle = 'black'
    ctx.fillRect(x*upscaleFactor, y*upscaleFactor,
                 2*halfPenSize*upscaleFactor, 2*halfPenSize*upscaleFactor)
    for (let yy = y - halfPenSize; yy < y + halfPenSize; yy++)
      for (let xx = x - halfPenSize; xx < x + halfPenSize; xx++)
        img_64[64*Math.min(63, Math.max(0, yy)) + Math.min(63, Math.max(0, xx))] = 1
    clearTimeout(timeout)
    timeout = setTimeout(predict, 500)
  }
  canvas.addEventListener('touchstart', e => {dragging=true; handler(e)})
  canvas.addEventListener('touchmove', e => {e.preventDefault(); dragging && handler(e)})
  canvas.addEventListener('touchend', () => dragging=false)
  canvas.addEventListener('mousedown', e => {dragging=true; handler(e)})
  canvas.addEventListener('mousemove', e => {dragging && handler(e)})
  canvas.addEventListener('mouseup', () => dragging=false)
  canvas.addEventListener('mouseleave', () => dragging=false)
  document.getElementById('clear').addEventListener('click', () => {
    ctx.fillStyle = 'white'
    ctx.fillRect(0, 0, 64*upscaleFactor, 64*upscaleFactor)
    output.textContent = ''
    img_64 = new Uint8Array(64*64)
  })
</script>
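
The inner pixel loop above, which writes the pen stamp into the 64×64 buffer while clamping coordinates to the canvas, can be mirrored in Python to see exactly what the model receives; `stamp` here is a hypothetical helper for illustration, not part of the notebook:

```python
import numpy as np

def stamp(img, x, y, half_pen=2):
    """Sets a (2*half_pen)-pixel square, clamping coordinates into [0, 63]."""
    for yy in range(y - half_pen, y + half_pen):
        for xx in range(x - half_pen, x + half_pen):
            img[min(63, max(0, yy)), min(63, max(0, xx))] = 1

img = np.zeros((64, 64), np.uint8)
stamp(img, 0, 0)       # at the corner, clamping folds out-of-range pixels inside
print(img.sum())       # → 4 distinct pixels set
```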







In [9]:
# YOUR ACTION REQUIRED:
# Load another model from 2_keras.ipynb and observe:
# - Do you get better/worse predictions?
# - Do you feel a difference in latency?
# - Can you figure out how the model "thinks" by providing similar images
#   that yield different predictions, or different images that yield the
#   same prediction?
#--snip
model = tf.keras.models.load_model(os.path.join(models_path, 'conv.h5'))
model.summary()


Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
reshape (Reshape)            (None, 64, 64, 1)         0         
_________________________________________________________________
conv2d (Conv2D)              (None, 64, 64, 32)        3232      
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 64, 64, 32)        102432    
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 16, 16, 64)        51264     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 256)               262400    
_________________________________________________________________
dropout (Dropout)            (None, 256)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 11)                2827      
=================================================================
Total params: 422,155
Trainable params: 422,155
Non-trainable params: 0
_________________________________________________________________
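
Working backwards from this summary, each Conv2D parameter count follows the formula out_channels · (k·k·in_channels) + out_channels. The numbers above are consistent with 10×10 kernels for the first two convolutions and a 5×5 kernel for the third (an inference from the counts alone, not from the training notebook):

```python
def conv2d_params(kernel, in_ch, out_ch):
    # One kernel*kernel*in_ch weight block per filter, plus one bias per filter.
    return out_ch * (kernel * kernel * in_ch) + out_ch

print(conv2d_params(10, 1, 32))   # → 3232   (conv2d)
print(conv2d_params(10, 32, 32))  # → 102432 (conv2d_1)
print(conv2d_params(5, 32, 64))   # → 51264  (conv2d_2)
```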

TensorFlow.js

Read about basic concepts in TensorFlow.js: https://js.tensorflow.org/tutorials/core-concepts.html

If you find the Colab %%html approach cumbersome for exploring the JS API, try the Codepen editor instead by clicking the "Try TensorFlow.js" button on https://js.tensorflow.org/

Basics


Getting the data out of a tensor in TensorFlow.js: use the async .data() method to show the output in the "output" element. See the output in the JavaScript console (e.g. Chrome developer tools). For convenience, you can also use the following Codepen: https://codepen.io/amld-tensorflow-basics/pen/OJPagyN


In [10]:
%%html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js"></script>
<pre id="output"></pre>
<script>
  let output = document.getElementById('output')
  let t = tf.tensor([1, 2, 3])
  output.textContent = t
  // YOUR ACTION REQUIRED:
  // Use "t.data()" to append the tensor's data values to "output.textContent".
  //--snip
  t.data().then(t_data => t_data.forEach(
    (value, idx) => output.textContent += `\n${idx}: ${value}`
  ))
</script>