This notebook acts as a Slack outgoing webhook endpoint, parsing certain messages as code to evaluate on a kernel. It also uses the Slack incoming webhook API to send text and image output from that kernel back to Slack.
To use this example notebook, it must run in an environment with one additional port open so that the HTTP server defined herein can receive webhook calls from Slack. For demo purposes, the default is to listen for HTTP requests on port 9001.
Set the URL of your configured Slack incoming webhook in the SLACK_URL global variable below. This notebook will forward select output from the kernel associated with the discussion notebook to that URL for posting in your configured channel. Set the token of your configured Slack outgoing webhook in the SLACK_TOKEN global variable below. This notebook will only accept messages bearing that token to avoid spoofing.
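For reference, the two payloads involved look roughly like the sketch below; the field names follow Slack's (legacy) webhook documentation and the values are fabricated. The outgoing webhook delivers a form-encoded POST to this notebook, and the incoming webhook accepts a JSON body posted back to SLACK_URL.
In [ ]:
# rough sketch of the two payloads (fabricated values)
# fields Slack's outgoing webhook POSTs to us, form-encoded:
outgoing_example = {
    'token': 'XXXXXXXXX',           # shared secret, checked against SLACK_TOKEN
    'trigger_word': 'slack:',       # whatever trigger word you configured
    'text': 'slack: print(1+1)',    # the full message, trigger word included
    'user_name': 'someuser',
    'channel_name': 'general',
}
# body we POST back to the incoming webhook URL, as JSON:
incoming_example = {
    'text': '2\n',
    'attachments': [{'fallback': 'plot', 'image_url': 'http://example.com/static/abcd1234.png'}],
}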
In [ ]:
import json
import base64
import os
import hashlib
from pprint import pprint # for debug
from jupyter_client.ioloop import IOLoopKernelManager
from tornado.httpclient import AsyncHTTPClient, HTTPClient
We'll set some hard-coded values up front. These should really get read from the environment but I'm lazy at the moment.
In [ ]:
SLACK_URL = 'https://hooks.slack.com/services/XXXXXX/XXXXXX/XXXXXXX'
SLACK_TOKEN = 'XXXXXXXXX'
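If you'd rather not bake the values into the notebook, a minimal sketch that prefers environment variables when they're set (the variable names are just a suggestion):
In [ ]:
# optionally override the hard-coded values from the environment
SLACK_URL = os.environ.get('SLACK_URL', SLACK_URL)
SLACK_TOKEN = os.environ.get('SLACK_TOKEN', SLACK_TOKEN)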
Try connecting to an existing kernel connection file if one exists specifically for the defrag demo. If it's not there, start a kernel separate from this one. This keeps user code out of the namespace of this service and avoids hard-to-reason-about async request handling all within a single kernel.
In [ ]:
if os.path.exists('/tmp/defrag_demo'):
    km = IOLoopKernelManager(connection_file='/tmp/defrag_demo')
    km.load_connection_file()
    print('connecting to existing kernel')
else:
    km = IOLoopKernelManager()
    km.start_kernel()
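If this notebook started the kernel itself (the else branch above), it's worth shutting that kernel down when the demo ends. A minimal cleanup sketch, assuming we only own the kernel when no connection file was found:
In [ ]:
import atexit

# True only when we launched the kernel above rather than attaching to an existing one
started_own_kernel = not os.path.exists('/tmp/defrag_demo')

@atexit.register
def shutdown_demo_kernel():
    if started_own_kernel and km.is_alive():
        km.shutdown_kernel(now=True)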
Slack doesn't accept base64-encoded blobs back as images; we can only send back a real URL. We'll make a local directory where we'll dump to disk any plots generated by the kernel. Later, we'll serve them up through the same web server that handles the Slack outgoing webhook calls.
In [ ]:
PLOT_DIR = '/home/jovyan/plots'
In [ ]:
!mkdir -p $PLOT_DIR
In [ ]:
def b64_to_file(b64_str, ext):
    '''Dump a base64 encoded string to disk as a binary file with the given extension.'''
    # decode base64 image and write to disk under a unique ID
    img = base64.decodebytes(b64_str.encode('utf-8'))
    # hash to filename
    name = hashlib.sha1(img).hexdigest()
    with open(os.path.join(PLOT_DIR, name+'.'+ext), 'wb') as f:
        f.write(img)
    return name
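A quick round trip shows the content-addressed naming (the payload here is arbitrary bytes, not a real image):
In [ ]:
# encode some demo bytes, write them under PLOT_DIR, and show the hash-based name
sample_b64 = base64.encodebytes(b'demo bytes, not a real PNG').decode('utf-8')
print(b64_to_file(sample_b64, 'png'))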
Connect to the iopub socket to receive kernel output.
In [ ]:
if 'iopub' in locals():
    iopub.close()
iopub = km.connect_iopub()
Define functions to handle message types of interest. The generic on_reply below dispatches to these.
In [ ]:
def on_stream(content):
    '''Handles stdout, stderr.'''
    return dict(text=content['text'])
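For example, stdout from the kernel maps straight through to a Slack text payload:
In [ ]:
# a stream message's content carries the stream name and the text written to it
print(on_stream({'name': 'stdout', 'text': 'Hi, all\n'}))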
In [ ]:
def on_display_data(content):
    '''Handles rich output.'''
    data = content['data']
    response = {}
    # prefer images
    attachments = []
    for key in data.keys():
        if key.startswith('image'):
            _, ext = key.split('/')
            name = b64_to_file(data[key], ext)
            # point to plot on the web, using the same extension the file was saved with
            # TODO: don't hard code the server URL
            attachments.append({
                "fallback": "Oh noes! The plot didn't render!",
                'image_url': 'http://parente.cloudet.xyz:9001/static/{}.{}'.format(name, ext)
            })
    if len(attachments):
        response['attachments'] = attachments
    # fallback on text
    if 'text/plain' in data:
        response['text'] = data['text/plain']
    return response if len(response) else None
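A display_data message carries a MIME bundle in its content. A fabricated example (the base64 payload is not a real image) shows the kind of Slack payload this produces:
In [ ]:
# fabricated display_data content: a MIME bundle with a plain-text repr and a "PNG"
demo_content = {
    'data': {
        'text/plain': '<matplotlib.figure.Figure at 0x7f0000000000>',
        'image/png': base64.encodebytes(b'fake png bytes').decode('utf-8'),
    },
    'metadata': {},
}
pprint(on_display_data(demo_content))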
The messaging docs say these two message types are equivalent, so just alias the function.
In [ ]:
on_execute_result = on_display_data
Build an HTTP client to push messages back to Slack.
In [ ]:
http_client = AsyncHTTPClient()
Process all kernel replies as they come in. Use the kernel manager methods for converting ZeroMQ stream byte strings to nice Python dictionaries.
In [ ]:
def on_reply(stream, msg_list):
    # process raw messages
    idents, msg_list = km.session.feed_identities(msg_list)
    msg = km.session.deserialize(msg_list)
    # get delegate based on message type
    func = globals().get('on_'+msg['msg_type'])
    if func is not None:
        # get an optional response
        response = func(msg['content'])
        if response:
            # dump the response as JSON to Slack
            http_client.fetch(SLACK_URL, method='POST',
                              body=json.dumps(response),
                              headers={'Content-Type': 'application/json'})
Hook the on_reply handler to the iopub stream.
In [ ]:
iopub.on_recv_stream(on_reply)
Create a client that can be used to execute code on the kernel.
In [ ]:
kc = km.client()
In [ ]:
import tornado.web
import tornado.httpserver
import json
Define a simple HTTP handler for Slack POSTs. Support a GET for liveness checks too.
In [ ]:
class IncomingHandler(tornado.web.RequestHandler):
    def get(self):
        self.finish('{"status": "ok"}')

    def post(self):
        token = self.get_body_argument('token')
        if token != SLACK_TOKEN:
            return self.send_error(401)
        # get code to run
        code = self.get_body_argument('text')
        # remove command prefix, up to first space
        code = code[code.find(' ')+1:].strip()
        # execute the code in the other kernel
        kc.execute(code)
        # return nothing for now
        self.finish()
Map the handler and start listening.
In [ ]:
application = tornado.web.Application([
    (r"/", IncomingHandler)
], static_path=PLOT_DIR)
In [ ]:
if 'server' in locals():
    server.stop()
server = tornado.httpserver.HTTPServer(application)
server.listen(9001, '0.0.0.0')
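Once the server is listening, you can poke it from a separate terminal rather than from this notebook (a blocking request issued here would stall the very event loop that serves it). A minimal smoke-test sketch, assuming the placeholder token above and a trigger word of slack: (both are just examples):
# run from a separate Python process, not from this notebook
from urllib.parse import urlencode
from urllib.request import urlopen

body = urlencode({'token': 'XXXXXXXXX',            # must match SLACK_TOKEN
                  'text': 'slack: print(6 * 7)'    # trigger word plus code to run
                  }).encode('utf-8')
print(urlopen('http://localhost:9001/', data=body).getcode())  # expect 200; kernel output goes to Slack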
Say hi.
In [ ]:
kc.execute('print("Hi, all")')