Predict with Model

View Config


In [ ]:
%%bash

pio init-model \
  --model-server-url http://prediction-spark.community.pipeline.io/ \
  --model-type spark \
  --model-namespace default \
  --model-name spark_airbnb \
  --model-version v1 \
  --model-path .
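
The same `pio init-model` invocation can be driven from Python rather than a `%%bash` cell. A minimal sketch, assuming only the CLI flags shown above — the helper itself (`build_init_model_command`) is hypothetical, not part of the pio package:

```python
def build_init_model_command(model_server_url, model_type, model_namespace,
                             model_name, model_version, model_path='.'):
    """Return the argv list for `pio init-model` with the given parameters.

    Flag names mirror the CLI call in the cell above.
    """
    return [
        'pio', 'init-model',
        '--model-server-url', model_server_url,
        '--model-type', model_type,
        '--model-namespace', model_namespace,
        '--model-name', model_name,
        '--model-version', model_version,
        '--model-path', model_path,
    ]
```

This list can be passed to `subprocess.run(..., check=True)` to launch the CLI from a script.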

Predict with Model (CLI)


In [ ]:
%%bash

pio predict \
  --model-test-request-path ./data/test_request.json

Predict with Model under Mini-Load (CLI)

This runs a small load test — the same prediction repeated several times — to give instant feedback on relative serving latency. It is not a substitute for a full load test.
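
The timing logic behind such a mini load test can be sketched in a few lines. This is an illustrative stand-alone sketch, not how `pio predict_many` is implemented; `send_fn` stands in for a single prediction request (e.g. an HTTP POST of `test_request.json`):

```python
import time

def time_calls(send_fn, num_iterations=5):
    """Time repeated calls to send_fn; return per-call latencies in seconds."""
    latencies = []
    for _ in range(num_iterations):
        start = time.perf_counter()
        send_fn()
        latencies.append(time.perf_counter() - start)
    return latencies

def summarize(latencies):
    """Return (min, mean, max) latency — enough for a quick relative comparison."""
    return min(latencies), sum(latencies) / len(latencies), max(latencies)
```

Comparing the (min, mean, max) triple across model versions gives the same kind of relative signal as `--num-iterations 5` above.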


In [ ]:
%%bash

pio predict_many \
  --model-test-request-path ./data/test_request.json \
  --num-iterations 5

Model Dashboards


In [ ]:
%%html

<iframe width=800 height=600 src="http://hystrix.community.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22Model%20Servers%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine.community.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D"></iframe>
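
The dashboard iframe above packs its Turbine stream configuration into a URL-encoded `streams` query parameter. Decoding it with the standard library shows what the Hystrix dashboard is actually monitoring:

```python
import json
from urllib.parse import urlparse, parse_qs

# The src URL from the iframe above
dashboard_url = ('http://hystrix.community.pipeline.io/hystrix-dashboard/monitor/monitor.html'
                 '?streams=%5B%7B%22name%22%3A%22Model%20Servers%22%2C%22stream%22%3A'
                 '%22http%3A%2F%2Fturbine.community.pipeline.io%2Fturbine.stream%22%2C'
                 '%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D')

# parse_qs percent-decodes the parameter; the value is a JSON list of streams
streams_param = parse_qs(urlparse(dashboard_url).query)['streams'][0]
streams = json.loads(streams_param)
print(streams)  # streams[0]['name'] == 'Model Servers'
```

So the dashboard is aggregating the "Model Servers" metrics from the Turbine stream at `http://turbine.community.pipeline.io/turbine.stream`.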

Predict with Model (REST)


In [ ]:
import requests

model_type = 'spark'
model_namespace = 'default'
model_name = 'spark_airbnb'
model_version = 'v1'

deploy_url = 'http://prediction-%s.community.pipeline.io/api/v1/model/predict/%s/%s/%s/%s' % (model_type, model_type, model_namespace, model_name, model_version)

with open('./data/test_request.json', 'rb') as fh:
    model_input_binary = fh.read()

response = requests.post(url=deploy_url,
                         data=model_input_binary,
                         timeout=30)

# Fail loudly on a non-2xx response instead of printing "Success!" unconditionally
response.raise_for_status()

print("Success!\n\n%s" % response.text)
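
The endpoint URL built above follows a fixed pattern. A small helper (hypothetical, not part of pio) that factors out that pattern makes the path components explicit and reusable across models:

```python
def build_predict_url(model_type, model_namespace, model_name, model_version):
    """Build the REST predict endpoint URL used in the cell above.

    Pattern (from this notebook):
    http://prediction-<type>.community.pipeline.io/api/v1/model/predict/<type>/<namespace>/<name>/<version>
    """
    return ('http://prediction-%s.community.pipeline.io'
            '/api/v1/model/predict/%s/%s/%s/%s'
            % (model_type, model_type, model_namespace, model_name, model_version))
```

For example, `build_predict_url('spark', 'default', 'spark_airbnb', 'v1')` reproduces the `deploy_url` used in the request above.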
