Deploy and Serve a Model with TensorFlow Serving

Open a Terminal through Jupyter Notebook

(Menu Bar -> Terminal -> New Terminal)

Run the Following Command in the Terminal to Start TensorFlow Serving

serve 9000 linear /root/models/linear/cpu/ true

The params are as follows:

  • 1: port number (int)
  • 2: model_name (an arbitrary name; clients use it to address the model)
  • 3: /path/to/model (base path above all version sub-directories; see the sketch after this list)
  • 4: request batching (true|false)
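
The serve command is a helper script specific to this environment; it presumably wraps the standard tensorflow_model_server binary. Under that assumption, an equivalent direct invocation would be:

tensorflow_model_server --port=9000 --model_name=linear --model_base_path=/root/models/linear/cpu/ --enable_batching=true

TensorFlow Serving expects the base path to contain numbered version sub-directories and serves the highest-numbered version it finds. For a SavedModel export, a typical layout looks like:

/root/models/linear/cpu/
    1/
        saved_model.pb
        variables/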

Open a 2nd Terminal through Jupyter Notebook

(Menu Bar -> Terminal -> New Terminal)

Run the Following Command in the Terminal to Start the HTTP-gRPC Proxy

http_grpc_proxy 9004 9000

The params are as follows:

  • 1: port for this proxy to listen on
  • 2: port of TensorFlow Serving (see the client sketch after this list)
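
TensorFlow Serving itself only speaks gRPC, which is why this proxy is needed: it listens for plain HTTP on port 9004 and forwards each request to the gRPC Predict endpoint on port 9000. As a minimal sketch of the downstream call the proxy makes, the following Python client (requires the grpcio and tensorflow-serving-api packages) talks to TensorFlow Serving directly. The input tensor name x_observed is an assumption based on the predict example below, so check the model's actual signature (e.g., with saved_model_cli show) before relying on it:

    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    # Connect to TensorFlow Serving's gRPC endpoint started above on port 9000.
    channel = grpc.insecure_channel('localhost:9000')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    # Build a PredictRequest addressed to the model registered as 'linear'.
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'linear'
    # 'x_observed' is an assumed input tensor name, not confirmed by this guide.
    request.inputs['x_observed'].CopyFrom(
        tf.make_tensor_proto([1.5], dtype=tf.float32))

    # Issue the RPC with a 5-second deadline and print the returned tensors.
    response = stub.Predict(request, 5.0)
    print(response.outputs)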

Open a 3rd Terminal through Jupyter Notebook

(Menu Bar -> Terminal -> New Terminal)

Run the Following Command in the Terminal to Make a Prediction

predict 9004 1.5

The params are as follows:

  • 1: port of the HTTP-gRPC proxy
  • 2: x_observed (the input value to predict on; see the sketch after this list)
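
The predict helper presumably issues an HTTP request to the proxy, passing x_observed along; the proxy then translates it into the gRPC call sketched earlier. The proxy's actual route and parameter names are not documented here, so the ones below are hypothetical; a rough Python equivalent might look like:

    import requests

    # Hypothetical endpoint and parameter name; the proxy's real HTTP
    # interface may differ.
    response = requests.get('http://localhost:9004/predict',
                            params={'x_observed': 1.5})
    print(response.text)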
