As a final lecture, we'll go over how to extend the reach of your Python code beyond the confines of the script itself and interact with the computer and other programs. Additionally, we'll look at ways of speeding up your Python code. By the end of this lecture, you should be able to:
- Use subprocess to invoke arbitrary external programs
- Use multiprocessing or joblib to parallelize embarrassingly parallel tasks
- Use numba to speed up numerical Python code

Python has lots of tools and packages to help you perform whatever analysis or function you want to do.
But sometimes you need to integrate with programs that don't have a Python interface.
Or maybe you just love the command prompt that much.
Especially in computational biology, there are frequent examples of needing to interface with programs outside of Python.
Python has a versatile subprocess
module for calling and interacting with other programs.
But first, a look at the venerable os.system command:
In [1]:
import os
os.system("curl www.cnn.com -o cnn.html")
Out[1]:
For simple commands, this is great. But where it quickly wears out its welcome is how it handles what comes back from the commands: the return value of the system command is the exit code, not what is printed to screen.
In [2]:
f = open('cnn.html')
len(f.read())
Out[2]:
What exit code indicates success?
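By convention, an exit code of 0 means success and anything nonzero signals some kind of failure. A minimal sketch of checking it (the command here is just an example):

import os
exit_code = os.system("ls")
if exit_code == 0:
    print("command succeeded")
else:
    print("command failed with code", exit_code)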
The subprocess
module replaces the following modules (so don't use them):
os.system
os.spawn*
os.popen*
popen2.*
commands.*
Think of subprocess
as a more powerful version of all these.
The most basic function in the subprocess
module is run
.
The first and only required argument is a list: it's the command-line command and all its arguments.
Remember commands like ls
? cd
? pwd
? Think of these commands as functions--they can also have arguments.
- Give subprocess.run a list with one element: just the command you want to run.
- Give subprocess.run a list with multiple elements, where the first element is the command to run, and the subsequent elements are the arguments to that command.

Let's see some examples, shall we?
In [3]:
import subprocess
subprocess.run(["ls"])
Out[3]:
In [4]:
subprocess.run(["touch", "test.txt"])
Out[4]:
In [5]:
subprocess.run(["echo", "something", ">>", "test.txt"])
Out[5]:
In [6]:
subprocess.run(["cat", "test.txt"])
Out[6]:
If there's some kind of oddity with the command you're trying to run, you'll get a nonzero exit code back.
In [7]:
subprocess.run(['ls','file with spaces'])
Out[7]:
What's wrong here?
If the filename really has spaces, one option is to hand the command to the shell with shell = True; in that case you pass a single string, with the filename quoted, so the shell handles the escaping:
In [8]:
subprocess.run('ls "file with spaces"', shell = True)
Out[8]:
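Alternatively (a sketch, not from the original notes): the standard library's shlex.split can turn a shell-style quoted string into the list form subprocess expects, so spaces are handled correctly without involving the shell at all.

import shlex
import subprocess
args = shlex.split('ls "file with spaces"')   # ['ls', 'file with spaces']
subprocess.run(args)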
If you're trying to run ls
on three separate files: file
, with
, and spaces
, you need to separate them into distinct list elements:
In [9]:
subprocess.run(['ls', 'file', 'with', 'spaces'])
Out[9]:
This is all well and good--I can run commands and see whether or not they worked--but usually when you run external programs, it's to generate some kind of output that you'll then want Python to use.
How do we access this output?
First, we'll need to introduce a concept common to all programming languages: input and output streams. These are otherwise known as "standard input", "standard output", and "standard error", or in programming parlance:

- stdin: standard input, usually from the keyboard
- stdout: standard output, usually from print() statements
- stderr: standard error, usually from error messages

In subprocess, the stdin, stdout, and stderr arguments specify the executed program's standard input, standard output, and standard error file handles, respectively.
We have to redirect these streams within subprocess
so we can see them from inside Python.
In [10]:
f = open('dump', 'w')
subprocess.run('ls', stdout = f)   # the command's stdout goes straight into the file
f.close()
f = open('dump', 'r')              # this would be a very inefficient way to get the stdout of a program
print(f.readlines())
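As an aside (a sketch, not part of the original notes): subprocess.run can capture a command's output directly, without a temporary file, by pointing stdout at subprocess.PIPE; the captured bytes come back on the returned CompletedProcess object.

result = subprocess.run(['ls'], stdout = subprocess.PIPE)
print(result.stdout.decode())   # result.stdout holds the raw bytes the command printed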
In [11]:
f = open('dump','w')
subprocess.run(['ls','nonexistantfile'], stdout = f, stderr = subprocess.STDOUT) #you can redirect stderr to stdout
print(open('dump').read())
Yes, that's the error message from the command line. Rather than showing up on the terminal, the command's stderr has been merged into its stdout, and that stdout has been redirected into the file. Hence, the error message is in the file!
What can the targets of the stdout and stderr arguments be? Valid targets, as we've seen, include:

- a file object created with open()
- your script's own stdin/stdout/stderr
- subprocess.PIPE, which enables communication directly between your script and the program (BE CAREFUL WITH THIS)

All the previous functions are just convenience wrappers around the Popen object.
In addition to accepting the command and its arguments as a list, and the stdout
and stderr
optional arguments, Popen
includes the cwd
argument, which sets the working directory of the process, or defaults to the current working directory of the Python script.
In [12]:
proc = subprocess.Popen(["ls"], stdout = subprocess.PIPE)
print(proc.stdout.readlines())
In [13]:
proc = subprocess.Popen(['ls'], stdout = subprocess.PIPE, cwd = "/Users/squinn")
print(proc.stdout.readlines())
So what is this mysterious PIPE
attribute?
"Pipes" are a common operating system term for avenues of communication between different programs. In this case, a pipe is established between your Python program and whatever program you're running through subprocess
.
If stdout
/stdin
/stderr
is set to subprocess.PIPE
, then that input/output stream of the external program is accessible through a file object in the returned object.
It's a regular ol' file object, so you have access to all the Python file object functions you know and love:
In [14]:
proc = subprocess.Popen(['ls'], stdout = subprocess.PIPE)
In [15]:
print(proc.stdout.readline())
In [16]:
print(proc.stdout.readline())
In [17]:
for elem in proc.stdout.readlines():
    print(elem)
The Popen object also has some useful process-management methods (a quick sketch follows this list):

- Popen.poll(): check to see if the process has terminated
- Popen.wait(): wait for the process to terminate (basically, ask your Python program to hang until the command is finished)
- Popen.terminate(): terminate the process (ask nicely)
- Popen.kill(): kill the process with extreme prejudice
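For example (a minimal sketch; sleep is just a stand-in for a longer-running command):

proc = subprocess.Popen(['sleep', '2'])
print(proc.poll())    # None: the process hasn't finished yet
proc.wait()           # block until it does
print(proc.poll())    # 0: the exit code of the now-finished process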
In [18]:
proc = subprocess.Popen('cat', stdin = subprocess.PIPE, stdout = subprocess.PIPE)
We've created two pipes between our Python program and the cat
command--an input pipe to stdin
, and an output pipe from stdout
.
In [19]:
proc.stdin.write(bytes("Hello", encoding = 'utf-8'))
proc.stdin.close()
Here, we've written some raw bytes to the input--to the cat
program, this looks like it's coming from the keyboard!
In [20]:
print(proc.stdout.read())
Now we're reading the output of the cat
program!
What would happen if in the previous code we omitted the proc.stdin.close()
call?
Managing simultaneous input and output is tricky and can easily lead to deadlocks.
For example, your script may be blocked waiting for output from the process which is blocked waiting for input.
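The documented way to sidestep this is Popen.communicate(), which sends all the input, reads all the output, and waits for the process to exit in one call. A sketch redoing the cat example:

proc = subprocess.Popen('cat', stdin = subprocess.PIPE, stdout = subprocess.PIPE)
out, err = proc.communicate(bytes("Hello", encoding = 'utf-8'))   # write, read, and wait, all at once
print(out)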
There are several parallel programming models enabled by a variety of hardware (multicore, cloud computing, supercomputers, GPU).
A thread of execution is the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler.
A process is an instance of a computer program.
The long and short of threads versus processes in Python is...
...always use processes.
Blah blah blah Global Interpreter Lock, blah blah blah only one thread can execute Python bytecode at a time in a single Python process, blah blah blah just use multiprocessing.
This is the concept of parallel programming, or having your program do multiple things at the very same time.
With the rising popularity of multi-core computers, most computers these days can easily handle at least 4 parallel processes at once. Why not take advantage of that?
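You can ask Python how many cores it sees; multiprocessing.cpu_count() is the documented call for this (a one-liner sketch):

import multiprocessing
print(multiprocessing.cpu_count())   # number of CPUs the operating system reports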
However, writing correct, high performance parallel code can be difficult: look no further than the Dining Philosophers problem.
...but in some cases, it's trivial.
A problem is embarrassingly parallel if it can be easily separated into independent subtasks, each of which does a substantial amount of computation.
Fortunately, this happens a lot!
In cases like these, using Pools will get you a significant speedup (roughly proportional to however many cores you have).
multiprocessing
supports the concept of a pool of workers. You initialize with the number of processes you want to run in parallel (the default is the number of CPUs on your system) and they are available for doing parallel work:
Then (and this is the somewhat-tricky part), you call map() and pass in two arguments: the function you want to run in parallel, and the iterable of inputs to run it on.
Clear as mud? Let's see an example.
In [21]:
import multiprocessing
In [22]:
def f(x):
    return x ** 2

pool = multiprocessing.Pool(processes = 4)
numbers_to_evaluate = range(20)
print(pool.map(f, numbers_to_evaluate))
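One housekeeping note (a sketch, not part of the original notes): a Pool keeps its worker processes alive until you release them, so it's tidy to use the pool as a context manager, which shuts the workers down automatically when the block ends.

with multiprocessing.Pool(processes = 4) as pool:
    print(pool.map(f, range(20)))   # same computation as above, with automatic cleanup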
While 90% of the work you'll likely do is embarrassingly parallel, some multiprocessing can't be done just with Pools.
Or, perhaps, you'll need to be able to communicate intermediate results between processes.
We can do this through queues and pipes.
multiprocessing.Queue
provides a simple first-in-first-out messaging queue between Python processes.
- put: put an element on the queue. This will block if the queue has filled up.
- get: get an element from the queue. This will block if the queue is empty.

Queues are great if all you want to do is basically "report" on the progress of a process. The process puts in updates, e.g. "20% finished", and the main Python script gets these updates and prints them out to you; a sketch of this pattern follows.
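Here's that pattern in miniature (the worker function and its messages are made up for illustration):

import multiprocessing

def worker(queue):
    for pct in (20, 40, 60, 80, 100):
        queue.put("{}% finished".format(pct))   # report progress back to the main script
    queue.put(None)                             # sentinel value: signal that we're done

queue = multiprocessing.Queue()
p = multiprocessing.Process(target = worker, args = (queue, ))
p.start()
msg = queue.get()          # blocks until the worker puts something on the queue
while msg is not None:
    print(msg)
    msg = queue.get()
p.join()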
A pipe is a communication channel between processes that can send and receive messages.
Pipe() returns a tuple of Connection objects representing the two ends of the pipe. Each connection object has the following methods:

- send: sends data to the other end of the pipe
- recv: waits for data from the other end of the pipe (unless the pipe is closed, in which case it raises EOFError)
- close: close the pipe

Unlike queues, pipes can support two-way communication between processes.
In [23]:
import multiprocessing
def chatty(conn):   # this takes a Connection object representing one end of a pipe
    msg = conn.recv()
    conn.send("you sent me '" + msg + "'")
In [24]:
# Create the two ends of the pipe.
(c1,c2) = multiprocessing.Pipe()
In [25]:
# Spin off the process that runs the "Chatty" function.
p1 = multiprocessing.Process(target = chatty, args = (c2, ))
p1.start()
In [26]:
# Send a message to the process, and receive its response.
c1.send("Hello!")
result = c1.recv()
p1.join()
print(result)
joblib
is a wonderful package that uses multiprocessing
on the backend, but simplifies things greatly by removing a lot of boilerplate.
The most likely source of duplicated effort is in loops.
In [27]:
import numpy as np
def identity(value):
    return np.sqrt(value ** 2)
In [28]:
# Now do some computation.
array = range(100000)
retval = []
for i in array:
    retval.append(identity(i))
An important observation: no specific value of the array depends on any other. This means the order in which these computations are performed doesn't matter. Plus, it takes forever to run this on 100,000 numbers, one after another.
So why not perform them at the same time?
With multiprocessing
, we had to set up a Pool
and a map
. Not with joblib
:
In [29]:
from joblib import Parallel, delayed
retval = Parallel(n_jobs = 8, verbose = 1)(delayed(identity)(i) for i in array)
This is a bit tricky at first, but I promise it's more straightforward than multiprocessing
.
Let's take the code bit by bit.
retval = Parallel(n_jobs = 8, verbose = 1)(delayed(identity)(i) for i in array)
- Parallel(n_jobs = 8, verbose = 1): This sets up the parallel computation, specifying we want to use 8 separate processes. The verbose argument is just for logging--the higher the number, the more debugging output it spits out.
- delayed(identity): This is a little syntax trick by joblib, but basically: whatever function you want to run in parallel, you pass in to the delayed() function.
- (i): This is the argument you want to pass to the function you're running in parallel.
- for i in array: and this is the loop through the data you want to process in parallel.

All the same pieces are there as in multiprocessing!
!
joblib
just streamlines the process by assuming loops are your primary source of repetition (a good assumption), so it bakes its machinery into the loop structure.
Anytime you do parameter scans, data point preprocessing, anything that is "embarrassingly parallel" or that would use multiprocessing.Pool
, use joblib
instead.
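As a sketch of the parameter-scan case (the score function and its parameter values are made up for illustration), delayed() happily takes multiple arguments:

from joblib import Parallel, delayed

def score(alpha, beta):
    # stand-in for some expensive model evaluation
    return alpha ** 2 + beta

results = Parallel(n_jobs = 8)(delayed(score)(a, b)
                               for a in (0.1, 1.0, 10.0)
                               for b in (0, 1))
print(results)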
Just a quick look at one of the more cutting-edge Python packages: numba.
Let's say you're trying to compute the Frobenius norm on an alignment matrix from a molecular dynamics simulation.
(that's just a fancy way of saying "square every element, sum them all up, and take the square root"--the Euclidean norm of the matrix treated as one long vector)
In [30]:
def frob(matrix):
    rows = matrix.shape[0]
    cols = matrix.shape[1]
    frob_norm = 0.0
    for i in range(rows):
        for j in range(cols):
            frob_norm += matrix[i, j] ** 2
    return np.sqrt(frob_norm)
Let's see how it works.
In [31]:
import numpy as np
x1 = np.random.random((10, 10)) # A 10x10 random matrix
f1 = frob(x1)
print(f1)
Cool. Seems to have worked reasonably well. Out of sheer curiosity, how long did that take to run?
In [32]:
%timeit frob(x1)
Not bad. $10^{-6}$ seconds per run.
How well does it scale if the matrix is an order of magnitude larger?
In [33]:
x2 = np.random.random((100, 100)) # A 100x100 random matrix!
f2 = frob(x2)
print(f2)
In [34]:
%timeit frob(x2)
Yikes--one order of magnitude increase in the matrix dimension, but two orders of magnitude increase in runtime.
Let's try one more data size increase. I have a bad feeling about this...
In [35]:
x3 = np.random.random((1000, 1000)) # Yikes
f3 = frob(x3)
print(f3)
In [36]:
%timeit frob(x3)
Another order of magnitude on the data, another two orders of magnitude on the runtime. Clearly not a good trend. Maybe a quadratic trend, in fact?
Point being, this code doesn't scale. At all.
Of course, the problem lies in the fact that you could be using NumPy array broadcasting. But let's say you didn't know about it.
Or, much more likely, it's a very small part of a much larger scientific program--complete with subprocesses and multiprocessing--and it's going to be tough to isolate a single part and optimize it.
"Just-in-time" compilation to the rescue!
In [37]:
from numba import jit
@jit
def frob2(matrix):
    rows = matrix.shape[0]
    cols = matrix.shape[1]
    frob_norm = 0.0
    for i in range(rows):
        for j in range(cols):
            frob_norm += matrix[i, j] ** 2
    return np.sqrt(frob_norm)
I promise--other than the @jit
decorator on top of the function definition, the code for frob2
is identical to that of frob
.
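(One aside not covered in these notes: numba also offers a stricter "nopython" mode, @jit(nopython = True), also spelled numba.njit, which raises an error if it can't fully compile the function rather than quietly running it in slower object mode. A sketch of the same function under njit; the result should be identical here:)

import numpy as np
from numba import njit

@njit
def frob2_strict(matrix):
    rows, cols = matrix.shape
    frob_norm = 0.0
    for i in range(rows):
        for j in range(cols):
            frob_norm += matrix[i, j] ** 2
    return np.sqrt(frob_norm)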
Let's test this out on the third and largest test data!
In [38]:
%timeit frob(x3)
In [42]:
%timeit frob2(x3)
Woo! Got our three orders of magnitude back!
For the sake of completeness, let's see how this compares to a full NumPy array broadcasting version.
In [40]:
def frob3(matrix):
    s = (matrix ** 2).sum()
    return np.sqrt(s)
In [43]:
%timeit frob3(x3)
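(For completeness, and as a pointer rather than a benchmark claim: NumPy also ships np.linalg.norm, which computes the Frobenius norm of a 2-D array by default.)

print(np.linalg.norm(x3))           # Frobenius norm is the default for a 2-D array
print(np.linalg.norm(x3, 'fro'))    # or request it by name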
Yes, ladies and gentlemen: numba
-optimized Python code is faster than you can get from doing everything right using NumPy magic.
numba
works its magic by selectively compiling portions of Python code so they run really, really fast.
"Interpreted" languages versus "compiled" languages was one of the first concepts we discussed early on.
The key phrase is tend to: packages like numba
are blurring this demarcation.
To summarize:

- Use subprocess to call external programs (I use it to interface with our class' Slack chat through Python!).
- Use multiprocessing or joblib for magically fast speedups through parallelism.
- Use numba's just-in-time compiler for even more speed in numerical code.