The IPython Notebook and other interactive tools are great for prototyping code and exploring data, but sooner or later we will want to use our program in a pipeline or run it in a shell script to process thousands of data files. In order to do that, we need to make our programs work like other Unix command-line tools. For example, we may want a program that reads a data set and prints the average inflammation per patient:
$ python readings.py --mean inflammation-01.csv
5.45
5.425
6.1
...
6.4
7.05
5.9
but we might also want to look at the minimum of the first four lines:
$ head -4 inflammation-01.csv | python readings.py --min
or the maximum inflammations in several files one after another:
$ python readings.py --max inflammation-*.csv
Our overall requirements are that the program accept a --min, --mean, or --max flag to determine which statistic to print. To make this work, we need to know how to handle command-line arguments in a program, and how to get at standard input. We'll tackle these questions in turn below.
Using the text editor of your choice, save the following in a text file:
In [2]:
!cat sys-version.py
The first line imports a library called sys, which is short for "system". It defines values such as sys.version, which describes which version of Python we are running.
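For reference, a minimal sys-version.py might look like this (a sketch; the exact file may differ):

# sys-version.py -- sketch reconstructed from the description above
import sys
print('version is', sys.version)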
We can run this script from within the IPython Notebook like this:
In [3]:
%run sys-version.py
or like this:
In [4]:
!ipython sys-version.py
The first method, %run, uses a special command in the IPython Notebook to run a program in a .py file. The second method is more general: the exclamation mark ! tells the Notebook to run a shell command, and it just so happens that the command we run is ipython with the name of the script.
Here's another script that does something more interesting:
In [5]:
!cat argv-list.py
The strange name argv stands for "argument values". Whenever Python runs a program, it takes all of the values given on the command line and puts them in the list sys.argv so that the program can determine what they were.
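A minimal argv-list.py consistent with that description might be just:

# argv-list.py -- sketch: show the command-line arguments Python received
import sys
print('sys.argv is', sys.argv)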
If we run this program with no arguments:
In [6]:
!ipython argv-list.py
the only thing in the list is the full path to our script, which is always sys.argv[0].
If we run it with a few arguments, however:
In [7]:
!ipython argv-list.py first second third
then Python adds each of those arguments to that magic list.
With this in hand, let's build a version of readings.py that always prints the per-patient mean of a single data file. The first step is to write a function that outlines our implementation, and a placeholder for the function that does the actual work. By convention this function is usually called main, though we can call it whatever we want:
In [8]:
!cat readings-01.py
This function gets the name of the script from sys.argv[0], because that's where it's always put, and the name of the file to process from sys.argv[1].
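A sketch of what readings-01.py might look like (the exact lesson file may differ in details):

# readings-01.py -- sketch: define main() but don't call it yet
import sys
import numpy

def main():
    script = sys.argv[0]        # the script's own name
    filename = sys.argv[1]      # the data file to process
    data = numpy.loadtxt(filename, delimiter=',')
    for value in numpy.mean(data, axis=1):   # per-patient (per-row) mean
        print(value)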
Here's a simple test:
In [9]:
%run readings-01.py inflammation-01.csv
There is no output because we have defined a function, but haven't actually called it. Let's add a call to main:
In [10]:
!cat readings-02.py
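readings-02.py is likely identical except for one new line at the bottom; a sketch:

# readings-02.py -- sketch: the same script, plus a call to main()
import sys
import numpy

def main():
    script = sys.argv[0]
    filename = sys.argv[1]
    data = numpy.loadtxt(filename, delimiter=',')
    for value in numpy.mean(data, axis=1):
        print(value)

main()   # the only change: actually run the function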
and run that:
In [11]:
%run readings-02.py inflammation-01.csv
The Right Way to Do It
If our programs can take complex parameters or multiple filenames, we shouldn't handle sys.argv directly. Instead, we should use Python's argparse library, which handles common cases in a systematic way, and also makes it easy for us to provide sensible error messages for our users.
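For example, a sketch of how argparse could express this lesson's interface (the option layout here is illustrative, not part of the lesson's scripts):

# Sketch: the same interface expressed with argparse.
import argparse

parser = argparse.ArgumentParser(description='Print per-patient statistics.')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--min', action='store_true', help='print the per-patient minimum')
group.add_argument('--mean', action='store_true', help='print the per-patient mean')
group.add_argument('--max', action='store_true', help='print the per-patient maximum')
parser.add_argument('filenames', nargs='*', help='CSV files to read')
args = parser.parse_args()
# argparse validates the flags and prints a usage message on error for us.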
Write a command-line program that does addition and subtraction:
python arith.py 1 + 2
3
python arith.py 3 - 4
-1
What goes wrong if you try to add multiplication to the program using '*'?
Using the glob module introduced in 03-loop.ipynb, write a simple version of ls that shows files in the current directory with a particular suffix:
python my_ls.py py
left.py
right.py
zero.py
The next step is to teach our program how to handle multiple files. Since 60 lines of output per file is a lot to page through, we'll start by creating three smaller files, each of which has three days of data for two patients:
In [12]:
!ls small-*.csv
In [13]:
!cat small-01.csv
In [14]:
%run readings-02.py small-01.csv
Using small data files as input also allows us to check our results more easily: here, for example, we can see that our program is calculating the mean correctly for each line, whereas we were really taking it on faith before. This is yet another rule of programming: "test the simple things first".
We want our program to process each file separately, so we need a loop that executes once for each filename. If we specify the files on the command line, the filenames will be in sys.argv, but we need to be careful: sys.argv[0] will always be the name of our script, rather than the name of a file. We also need to handle an unknown number of filenames, since our program could be run for any number of files.
The solution to both problems is to loop over the contents of sys.argv[1:]. The '1' tells Python to start the slice at location 1, so the program's name isn't included; since we've left off the upper bound, the slice runs to the end of the list, and includes all the filenames.
Here's our changed program:
In [15]:
!cat readings-03.py
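A sketch of what readings-03.py might contain, looping over sys.argv[1:]:

# readings-03.py -- sketch: handle any number of filenames
import sys
import numpy

def main():
    script = sys.argv[0]
    for filename in sys.argv[1:]:      # skip the script name itself
        data = numpy.loadtxt(filename, delimiter=',')
        for value in numpy.mean(data, axis=1):
            print(value)

main()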
and here it is in action:
In [16]:
%run readings-03.py small-01.csv small-02.csv
Note: at this point, we have created three versions of our script, called readings-01.py, readings-02.py, and readings-03.py. We wouldn't do this in real life: instead, we would have one file called readings.py that we committed to version control every time we got an enhancement working. For teaching, though, we need all the successive versions side by side.
The next step is to teach our program to pay attention to the --min, --mean, and --max flags. These always appear before the names of the files, so we could just do this:
In [17]:
!cat readings-04.py
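A sketch of what readings-04.py might look like, consistent with the problems described below (everything inside one long main, and unrecognized flags silently ignored):

# readings-04.py -- sketch: flag handling done entirely inside main()
import sys
import numpy

def main():
    script = sys.argv[0]
    action = sys.argv[1]
    filenames = sys.argv[2:]

    for filename in filenames:
        data = numpy.loadtxt(filename, delimiter=',')
        if action == '--min':
            for value in numpy.min(data, axis=1):
                print(value)
        elif action == '--mean':
            for value in numpy.mean(data, axis=1):
                print(value)
        elif action == '--max':
            for value in numpy.max(data, axis=1):
                print(value)
        # if action is anything else, the file is loaded and silently ignored

main()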
This works:
In [18]:
%run readings-04.py --max small-01.csv
but there are several things wrong with it:
main is too large to read comfortably.
If action isn't one of the three recognized flags, the program loads each file but does nothing with it (because none of the branches in the conditional match). Silent failures like this are always hard to debug.
This version pulls the processing of each file out of the loop into a function of its own. It also checks that action is one of the allowed flags before doing any processing, so that the program fails fast:
In [19]:
!cat readings-05.py
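A sketch of what readings-05.py might look like (line counts won't match the lesson's file exactly):

# readings-05.py -- sketch: check the flag up front, then delegate to process()
import sys
import numpy

def main():
    script = sys.argv[0]
    action = sys.argv[1]
    filenames = sys.argv[2:]
    assert action in ['--min', '--mean', '--max'], \
        'Action is not one of --min, --mean, or --max: ' + action
    for filename in filenames:
        process(filename, action)

def process(filename, action):
    data = numpy.loadtxt(filename, delimiter=',')
    if action == '--min':
        values = numpy.min(data, axis=1)
    elif action == '--mean':
        values = numpy.mean(data, axis=1)
    elif action == '--max':
        values = numpy.max(data, axis=1)
    for value in values:
        print(value)

main()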
This is four lines longer than its predecessor, but broken into more digestible chunks of 8 and 12 lines.
Python has a module named argparse that helps handle complex command-line flags. We will not cover this module in this lesson, but you can read Tshepang Lekhonkhobe's Argparse tutorial, which is part of Python's official documentation.
Rewrite this program so that it uses -n, -m, and -x instead of --min, --mean, and --max respectively. Is the code easier to read? Is the program easier to understand?
Separately, modify the program so that if no parameters are given (i.e., no action is specified and no filenames are given), it prints a message explaining how it should be used.
Separately, modify the program so that if no action is given it displays the means of the data.
The next thing our program has to do is read data from standard input if no filenames are given so that we can put it in a pipeline, redirect input to it, and so on. Let's experiment in another script:
In [20]:
!cat count-stdin.py
This little program reads lines from a special "file" called sys.stdin, which is automatically connected to the program's standard input. We don't have to open it; Python and the operating system take care of that when the program starts up. But we can do almost anything with it that we could do to a regular file.
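A sketch of what count-stdin.py might contain:

# count-stdin.py -- sketch: count the lines arriving on standard input
import sys

count = 0
for line in sys.stdin:
    count += 1

print(count, 'lines in standard input')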
Let's try running it as if it were a regular command-line program:
In [21]:
!ipython count-stdin.py < small-01.csv
What if we run it using %run?
In [22]:
%run count-stdin.py < fractal_1.txt
As you can see, %run doesn't understand file redirection: that's a shell thing.
A common mistake is to try to run something that reads from standard input like this:
!ipython count-stdin.py fractal_1.txt
i.e., to forget the < character that redirects the file to standard input. In this case, there's nothing in standard input, so the program waits at the start of the loop for someone to type something on the keyboard. Since there's no way for us to do this, our program is stuck, and we have to halt it using the Interrupt option from the Kernel menu in the Notebook.
We now need to rewrite the program so that it loads data from sys.stdin if no filenames are provided. Luckily, numpy.loadtxt can handle either a filename or an open file as its first parameter, so we don't actually need to change process. That leaves main:
def main():
    script = sys.argv[0]
    action = sys.argv[1]
    filenames = sys.argv[2:]
    assert action in ['--min', '--mean', '--max'], \
        'Action is not one of --min, --mean, or --max: ' + action
    if len(filenames) == 0:
        process(sys.stdin, action)
    else:
        for f in filenames:
            process(f, action)
Let's try it out (we'll see in a moment why we send the output through head):
In [23]:
!ipython readings-06.py --mean < small-01.csv | head -10
Whoops: why are we getting IPython's help rather than the line-by-line average of our data? The answer is that IPython has a hard time telling which command-line arguments are meant for it, and which are meant for the program it's running. To make our meaning clear, we have to use -- (a double dash) to separate the two:
In [24]:
!ipython readings-06.py -- --mean < small-01.csv
That's better. In fact, that's done: the program now does everything we set out to do.
The sys library connects a Python program to the system it is running on.
sys.argv contains the command-line arguments that a program was run with.
sys.stdin connects to a program's standard input.
sys.stdout connects to a program's standard output.