Deploy to TensorFlow Model Server

Open a Terminal through Jupyter Notebook

(Menu Bar -> Terminal -> New Terminal)

Start TensorFlow Serving in the Terminal

serve 9000 linear /root/models/linear/cpu/ false

The params are as follows:

  • 1: port number (int)
  • 2: model_name (anything)
  • 3: /path/to/model (base path above all version sub-directories)
  • 4: request batching (true|false)
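
The `serve` command is a convenience wrapper provided by this environment. A plausible sketch of what it expands to, assuming it invokes TensorFlow Serving's `tensorflow_model_server` binary directly (the wrapper's internals are an assumption; the flags shown are real `tensorflow_model_server` flags):

```shell
# Hypothetical expansion of: serve 9000 linear /root/models/linear/cpu/ false
PORT=9000                                # param 1: port number
MODEL_NAME=linear                        # param 2: model_name
MODEL_BASE_PATH=/root/models/linear/cpu/ # param 3: base path above version sub-dirs
ENABLE_BATCHING=false                    # param 4: request batching

CMD="tensorflow_model_server --port=${PORT} --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH} --enable_batching=${ENABLE_BATCHING}"
echo "${CMD}"
```

TensorFlow Serving will scan the base path for numeric version sub-directories (e.g. `/root/models/linear/cpu/1/`) and serve the highest version it finds.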

Open a 2nd Terminal through Jupyter Notebook

(Menu Bar -> Terminal -> New Terminal)

Start the HTTP-gRPC Proxy in the 2nd Terminal

http_grpc_proxy 9004 9000

The params are as follows:

  • 1: port for this proxy to listen on
  • 2: port of TensorFlow Serving
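
Conceptually, the proxy listens for plain HTTP requests on the first port and forwards each one as a gRPC call to TensorFlow Serving on the second. A minimal Python sketch of that shape, with the gRPC hop stubbed out as a comment (the real proxy's internals are not documented here, so this is illustrative only):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

TF_SERVING_PORT = 9000  # the gRPC backend started in the previous step


class ProxyHandler(BaseHTTPRequestHandler):
    """Sketch of an HTTP front-end for a gRPC model server."""

    def do_GET(self):
        # A real proxy would build a TensorFlow Serving PredictRequest
        # from this HTTP request and call the gRPC PredictionService
        # on TF_SERVING_PORT; here that hop is stubbed out.
        body = json.dumps({"forwarded_to_grpc_port": TF_SERVING_PORT}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging


def start_proxy(port=0):
    """Start the sketch proxy on a background thread; port 0 = ephemeral."""
    server = HTTPServer(("localhost", port), ProxyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Splitting HTTP termination from the gRPC backend like this lets browser- and curl-friendly clients reach a server that only speaks gRPC.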

Open a 3rd Terminal through Jupyter Notebook

(Menu Bar -> Terminal -> New Terminal)

Run the Following Command in the 3rd Terminal to Make a Prediction

predict 9004 1.5

The params are as follows:

  • 1: port for http-grpc proxy
  • 2: x_observed
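
Like `serve`, the `predict` command is a wrapper, presumably around a plain HTTP call to the proxy. A minimal Python client sketch, assuming a JSON POST to a `/predict` endpoint with an `x_observed` field (the endpoint path and payload key are assumptions, not documented here):

```python
import json
import urllib.request


def build_predict_request(proxy_port, x_observed, host="localhost"):
    """Build the HTTP request for the proxy. The /predict path and the
    'x_observed' payload key are assumptions about the proxy's API."""
    url = f"http://{host}:{proxy_port}/predict"
    payload = json.dumps({"x_observed": x_observed}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})


def predict(proxy_port, x_observed):
    """Rough equivalent of `predict 9004 1.5`; requires the HTTP-gRPC
    proxy from the previous step to be running on proxy_port."""
    with urllib.request.urlopen(build_predict_request(proxy_port, x_observed)) as resp:
        return json.loads(resp.read())
```

With the proxy from the previous step listening on 9004, `predict(9004, 1.5)` would return the model's prediction for x_observed = 1.5.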