A Python Tour of Data Science: High Performance Computing

Michaël Defferrard, PhD student, EPFL LTS2

Exercise: is Python slow?

That is one of the most frequently heard complaints about Python. Because CPython, the Python reference implementation, interprets the language (i.e. it compiles Python code to intermediate bytecode, which is then interpreted by a virtual machine), it is inherently slower than compiled languages, especially for computation-heavy tasks such as number crunching.

There are three ways around it that we'll explore in this exercise:

  1. Specialized libraries.
  2. Compile Python to machine code.
  3. Implement in a compiled language and call from Python.

In this exercise we'll compare several implementations of the same function. Our goal is to measure their execution times and get a sense of the many ways to write efficient Python code. We test seven implementations: pure Python, NumPy, scikit-learn, Numba, Cython, C, and Fortran.

Throughout the exercise we'll use the function $$\text{accuracy}(\hat{y}, y) = \frac{1}{n} \sum_{i=0}^{n-1} 1(\hat{y}_i = y_i),$$ where $1(x)$ is the indicator function and $n$ is the number of samples. This function computes the accuracy of a classifier, i.e. the fraction of correct predictions. A pure Python implementation is given below.


In [ ]:
def accuracy_python(y_pred, y_true):
    """Plain Python implementation."""
    num_correct = 0
    for y_pred_i, y_true_i in zip(y_pred, y_true):
        if y_pred_i == y_true_i:
            num_correct += 1
    return num_correct / len(y_true)

Below we test the above implementation and measure its execution time. The %timeit magic command provided by IPython is a useful helper to measure the execution time of a line of Python code. As we'll see, this implementation is very inefficient compared to what we can achieve.


In [ ]:
import numpy as np

c = 10  # Number of classes.
n = int(1e6)  # Number of samples.

y_true = np.random.randint(0, c, size=n)
y_pred = np.random.randint(0, c, size=n)
print('Expected accuracy: {}'.format(1/c))
print('Empirical accuracy: {}'.format(accuracy_python(y_pred, y_true)))

%timeit accuracy_python(y_pred, y_true)

1 Specialized libraries

Specialized libraries, which provide efficient compiled implementations of the heavy computations, are an easy way to solve the performance problem. NumPy, for example, uses efficient BLAS and LAPACK implementations as a backend. SciPy and scikit-learn fall in the same category.

Implement below the accuracy function using:

  1. Only functions provided by NumPy. The idea here is to vectorize the computation.
  2. The implementation of scikit-learn, our machine learning library.

Then test that they provide the correct result and measure their execution time. How much faster are they compared to the pure Python implementation?


In [ ]:
def accuracy_numpy(y_pred, y_true):
    """Numpy implementation."""
    # Your code here.
    return 0

def accuracy_sklearn(y_pred, y_true):
    """Scikit-learn implementation."""
    # Your code here.
    return 0

# assert np.allclose(accuracy_numpy(y_pred, y_true), accuracy_python(y_pred, y_true))
# assert np.allclose(accuracy_sklearn(y_pred, y_true), accuracy_python(y_pred, y_true))

# %timeit accuracy_numpy(y_pred, y_true)
# %timeit accuracy_sklearn(y_pred, y_true)
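
If you want something to compare against, one possible way to fill in those two functions is sketched below: the NumPy version relies on a vectorized comparison followed by np.mean, the scikit-learn version on sklearn.metrics.accuracy_score. It is a sketch, not the only solution.


In [ ]:
# A possible solution (sketch), not the only way to vectorize the computation.
from sklearn.metrics import accuracy_score

def accuracy_numpy(y_pred, y_true):
    """NumPy implementation: vectorized element-wise comparison."""
    return np.mean(y_pred == y_true)

def accuracy_sklearn(y_pred, y_true):
    """Scikit-learn implementation."""
    return accuracy_score(y_true, y_pred)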

2 Compiled Python

The second option, when the algorithm does not exist in our favorite libraries and we have to implement it ourselves, is to write it in Python and compile it to machine code.

Below you'll compile Python code with two frameworks:

  1. Numba, a just-in-time (JIT) compiler for Python, using the LLVM compiler infrastructure.
  2. Cython, which requires type information, transpiles Python to C and then compiles the generated C code.

While these two approaches offer maximal compatibility with the CPython and NumPy ecosystems, an alternative is to use another implementation of the Python language, such as PyPy, which features a just-in-time compiler and supports multiple back-ends (C, CLI, JVM). Other alternatives are Jython, which runs Python on the Java platform, and IronPython / PythonNet for the .NET platform.


In [ ]:
from numba import jit

@jit
def accuracy_numba(y_pred, y_true):
    """Plain Python implementation, compiled by LLVM through Numba."""
    return 0
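
A possible Numba implementation is sketched below: the very same loop as accuracy_python, which Numba compiles to machine code the first time the function is called. The nopython=True option asks Numba to fail instead of silently falling back to the interpreter.


In [ ]:
# A possible solution (sketch): the plain Python loop, compiled by Numba.
from numba import jit

@jit(nopython=True)
def accuracy_numba(y_pred, y_true):
    """Plain Python implementation, compiled by LLVM through Numba."""
    num_correct = 0
    for i in range(len(y_true)):
        if y_pred[i] == y_true[i]:
            num_correct += 1
    return num_correct / len(y_true)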

In [ ]:
%load_ext Cython

In [ ]:
%%cython
cimport numpy as np
cimport cython

def accuracy_cython(np.ndarray[long, ndim=1] y_pred, np.ndarray[long, ndim=1] y_true):
    """Python implementation with type information, transpiled to C by Cython."""
    return 0
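
For reference, a possible Cython version is sketched below: a typed loop which Cython can translate into tight C code. The cast to double avoids integer division; disabling bounds and wraparound checking is optional and assumes the indices are always valid.


In [ ]:
%%cython
# A possible solution (sketch): typed loop, transpiled to C by Cython.
cimport numpy as np
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
def accuracy_cython(np.ndarray[long, ndim=1] y_pred, np.ndarray[long, ndim=1] y_true):
    """Python implementation with type information, transpiled to C by Cython."""
    cdef int n = y_true.shape[0]
    cdef int num_correct = 0
    cdef int i
    for i in range(n):
        if y_pred[i] == y_true[i]:
            num_correct += 1
    return <double>num_correct / n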

Evaluate below the performance of those two implementations, while testing their correctness. How do they compare with plain Python and specialized libraries?


In [ ]:
# Your code here.
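
A possible evaluation is sketched below, mirroring the commented checks of the previous section: compare each result against the pure Python reference, then time it.


In [ ]:
# A possible evaluation (sketch).
assert np.allclose(accuracy_numba(y_pred, y_true), accuracy_python(y_pred, y_true))
assert np.allclose(accuracy_cython(y_pred, y_true), accuracy_python(y_pred, y_true))

%timeit accuracy_numba(y_pred, y_true)
%timeit accuracy_cython(y_pred, y_true)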

3 Using C from Python

Here we'll explore our third option to make Python faster: implement the function in another language! Below you'll

  1. implement the accuracy function in C,
  2. compile it, e.g. with the GNU compiler collection (GCC),
  3. call it from Python.

In [ ]:
%%file function.c

// The content of this cell is written to the "function.c" file in the current directory.

double accuracy(long* y_pred, long* y_true, int n) {
    // Your code here.
    return 0;
}

The cell below contains a shell script, which IPython executes as if you had typed the commands in your terminal. These commands compile the above C program into a shared library with GCC. You can use any other compiler, text editor, or IDE to produce the library. Windows users may want to use the Microsoft toolchain.


In [ ]:
%%script sh
FILE=function
gcc -c -O3 -Wall -std=c11 -pedantic -fPIC -o $FILE.o $FILE.c
gcc -o lib$FILE.so -shared $FILE.o
file lib$FILE.so

The cell below finally creates a wrapper around our C library so that we can easily call it from Python.


In [ ]:
import ctypes

libfunction = np.ctypeslib.load_library('libfunction', './')
libfunction.accuracy.restype = ctypes.c_double
libfunction.accuracy.argtypes = [
    np.ctypeslib.ndpointer(dtype=np.int_),  # The dtype must match the C `long` type and the arrays' dtype.
    np.ctypeslib.ndpointer(dtype=np.int_),
    ctypes.c_int
]

def accuracy_c(y_pred, y_true):
    n = y_pred.size
    return libfunction.accuracy(y_pred, y_true, n)

Evaluate below the performance of your C implementation, and test its correctness. How does it compare with the others?


In [ ]:
# Your code here.
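
One possible evaluation, assuming your C implementation is in place:


In [ ]:
# A possible evaluation (sketch).
assert np.allclose(accuracy_c(y_pred, y_true), accuracy_python(y_pred, y_true))
%timeit accuracy_c(y_pred, y_true)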

4 Using Fortran from Python

Same idea as before, with Fortran! Fortran is an imperative programming language developed in the 1950s, especially suited to numeric and scientific computing. While you probably won't write new code in Fortran, you may have to interface with legacy code (especially in large and old corporations). Here we'll resort to the f2py utility provided by the NumPy project for the (almost automatic) generation of a wrapper.


In [ ]:
%%file function.f

! The content of this cell is written to the "function.f" file in the current directory.

      SUBROUTINE DACCURACY(YPRED, YTRUE, ACC, N)

CF2PY INTENT(OUT) :: ACC
CF2PY INTENT(HIDE) :: N
      INTEGER*4 YPRED(N)
      INTEGER*4 YTRUE(N)
      DOUBLE PRECISION ACC
      INTEGER N, NTOTAL, NCORRECT
    
      ! Your code here.

      ACC = 0 
      END

The command below compiles the Fortran code and generates a Python wrapper.


In [ ]:
!f2py -c -m function function.f  # >> /dev/null

In [ ]:
import function
def accuracy_fortran(y_pred, y_true):
    return function.daccuracy(y_pred, y_true)

Evaluate below the performance of your Fortran implementation, and test its correctness. How does it compare with the others?


In [ ]:
# Your code here.
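
One possible evaluation, assuming your Fortran subroutine is in place:


In [ ]:
# A possible evaluation (sketch).
assert np.allclose(accuracy_fortran(y_pred, y_true), accuracy_python(y_pred, y_true))
%timeit accuracy_fortran(y_pred, y_true)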

5 Analysis

Plot a graph with the number of samples $n$ on the x-axis and the execution time of the various implementations on the y-axis.


In [ ]:
# Your code here.
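
A possible sketch is given below. It assumes all the implementations above are defined and correct, and uses the standard timeit module together with matplotlib; a log-log scale makes the different orders of magnitude readable.


In [ ]:
# A possible solution (sketch): time every implementation for growing n.
from timeit import timeit
import matplotlib.pyplot as plt

implementations = {
    'python': accuracy_python,
    'numpy': accuracy_numpy,
    'sklearn': accuracy_sklearn,
    'numba': accuracy_numba,
    'cython': accuracy_cython,
    'c': accuracy_c,
    'fortran': accuracy_fortran,
}
sizes = [10**e for e in range(2, 7)]

plt.figure(figsize=(8, 5))
for name, fn in implementations.items():
    times = []
    for n in sizes:
        yt = np.random.randint(0, c, size=n)
        yp = np.random.randint(0, c, size=n)
        times.append(timeit(lambda: fn(yp, yt), number=10) / 10)
    plt.loglog(sizes, times, '.-', label=name)
plt.xlabel('number of samples n')
plt.ylabel('execution time [s]')
plt.legend()
plt.show()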

To reflect on what we've done, answer the following questions:

  1. Which implementation is the fastest?
  2. Which solution was the easiest to work with?
  3. Which solution would you use in which situation?