The Butterfly Algorithm

Danny Hermes (dhermes)

February 11, 2015

You can find the source of these slides on GitHub.


In [1]:
%matplotlib inline
from make_partition_plots import make_butterfly
make_butterfly()


Outline

  • Background for Butterfly
  • Motivation from DFT
  • Details of Algorithm and Speedup
  • Code and Comparison

Setting the Stage

$$\widehat{f}(\xi) = \mathcal{F}f(\xi) = \int_{\mathbb{R}} \text{exp}\left(- 2 \pi x \xi \sqrt{-1}\right) f(x) \, dx$$
$$\widehat{f}_k = \sum_{j = 0}^{N - 1} \text{exp}\left(- \frac{2 \pi k j \sqrt{-1}}{N}\right) f_j$$
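
A quick sanity check of the convention above: the discrete sum uses the same sign and normalization as np.fft.fft, so the two agree on a small random vector (the size N_check and the data here are arbitrary example values).

import numpy as np

# Direct evaluation of f_hat_k = sum_j exp(-2*pi*i*k*j/N) f_j, compared
# against np.fft.fft (which likewise applies no 1/N factor).
N_check = 8
f_check = np.random.randn(N_check)
j_vals = np.arange(N_check)
direct = np.array([
    np.sum(np.exp(-2.0j * np.pi * k * j_vals / N_check) * f_check)
    for k in xrange(N_check)])
print np.max(np.abs(direct - np.fft.fft(f_check)))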

In [2]:
from IPython import display

display.Audio(filename='resources/bluewhale.wav')


Out[2]:


In [3]:
import numpy as np


def load_whale():
    # http://www.mathworks.com/help/matlab/math/fast-fourier-transform-fft.html
    data = np.load('resources/bluewhale.npz')
    X = data['X']
    sampling_rate = int(data['rate'])

    blue_whale_begin = 24500 - 1
    blue_whale_end = 31000 - 1

    blue_whale_call = X[blue_whale_begin:blue_whale_end + 1]
    size_call = len(blue_whale_call)

    # Pad signal with zeros up to the next power of 2.
    N = int(2**np.ceil(np.log2(size_call)))

    blue_whale_call = np.hstack([
        blue_whale_call,
        np.zeros(N - len(blue_whale_call)),
    ])
    return sampling_rate, blue_whale_call

In [4]:
sampling_rate, blue_whale_call = load_whale()

N = len(blue_whale_call)
time_base = np.arange(N) * 10.0 / sampling_rate

In [5]:
from matplotlib import pyplot as plt


plt.plot(time_base, blue_whale_call)
plt.title('Blue Whale B Call')
plt.xlim((time_base[0], time_base[-1]))
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude')
plt.show()



In [6]:
import time


start = time.time()
dft_whale_call = np.fft.fft(blue_whale_call, n=N)
fft_duration = time.time() - start

message = r'\text{Duration with FFT optimized: } %2.10f' % (fft_duration,)
display.display(display.Math(message))


$$\text{Duration with FFT optimized: } 0.0009500980$$

In [7]:
re_sampled_time = np.arange(len(dft_whale_call)) * sampling_rate / (10.0 * N)
amplitude = np.abs(dft_whale_call) / N

plt.plot(re_sampled_time[:N/2], amplitude[:N/2])

plt.xlabel('Frequency (Hz)')
plt.ylabel('Amplitude')
plt.title('Component Frequencies')
plt.show()


From DFT to Butterflies

$$\widehat{f}_k = \sum_{j = 0}^{N - 1} K(t_k, s_j) \, f_j$$

In [8]:
from make_partition_plots import get_random_intervals
from make_partition_plots import naive_interaction


S_VALUES, T_VALUES = get_random_intervals(16)
naive_interaction(S_VALUES, T_VALUES, N=16)


$$\widehat{f}(t) = \sum_{s} K(t, s) \, D(s)$$

DFT as Butterfly

$$t_k = 2 \pi k, \quad s_j = \frac{j}{N}, \quad K(t, s) = e^{- t s \sqrt{-1}}$$
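
A small illustrative check: with these choices of $t_k$ and $s_j$, the kernel entries $K(t_k, s_j)$ are exactly the DFT twiddle factors $\text{exp}\left(-2 \pi k j \sqrt{-1} / N\right)$ (N_small below is an arbitrary example size).

# The butterfly kernel on the grids t_k = 2*pi*k, s_j = j/N reproduces
# the DFT matrix exp(-2*pi*i*k*j/N).
N_small = 4
k_vals = np.arange(N_small)
t_small = 2 * np.pi * k_vals
s_small = k_vals / float(N_small)
kernel_matrix = np.exp(-1.0j * np.outer(t_small, s_small))
dft_matrix = np.exp(-2.0j * np.pi * np.outer(k_vals, k_vals) / N_small)
print np.max(np.abs(kernel_matrix - dft_matrix))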

Naive gets Quadratic

In the most naive implementation the runtime is quadratic: each of the $N$ outputs $\widehat{f}_k$ requires an $\mathcal{O}(N)$ sum over all of the sources.


In [9]:
def compute_f_hat(f, t, s, kernel_func):
    f_hat = np.zeros(f.shape, dtype=np.complex128)
    for k in xrange(len(f)):
        # Vectorized update.
        f_hat[k] = np.sum(kernel_func(t[k], s) * f)

    return f_hat

In [10]:
# We expect a "slowdown factor" of N / log N
expected_quadratic = fft_duration * N / np.log2(N)

message = r'\text{Expected Quadratic Run-time: } %2.10f' % (expected_quadratic,)
display.display(display.Math(message))


$$\text{Expected Quadratic Run-time: } 0.5987079327$$

In [11]:
N_vals = np.arange(N, dtype=np.float64)
t = 2 * np.pi * N_vals
s = N_vals / N


def dft_kernel(t, s):
    return np.exp(- 1.0j * t * s)


start = time.time()
f_hat = compute_f_hat(blue_whale_call, t, s, dft_kernel)
naive_duration = time.time() - start

message = r'\text{Actual Naive Duration: } %2.10f' % (naive_duration,)
display.display(display.Math(message))


$$\text{Actual Naive Duration: } 7.0778009892$$

Is the naive implementation correct?


In [12]:
error = np.linalg.norm(f_hat - dft_whale_call, ord=2)

sig, exponent = str(error).split('e')
expr = r'\|e\|_2 = %s \cdot 10^{%s}' % (sig, exponent)
display.display(display.Math(expr))


$$\|e\|_2 = 3.65989629865 \cdot 10^{-10}$$

Doing Better than Quadratic


In [13]:
naive_interaction(S_VALUES, T_VALUES, N=16)



In [14]:
from make_partition_plots import binned_interaction
binned_interaction(S_VALUES, T_VALUES, L=2)


$$\widehat{f}(t) = \sum_{s} K(t, s) \, D(s) \approx \sum_{\widehat{s}} K' \left(t, \widehat{s}\right) D\left(\widehat{s}\right)$$
$$\mathbf{\text{DFT: }} s_j = \frac{j}{N} \in \left[0, 1\right), \quad 2^L \text{ segments} \Longrightarrow \sigma_{j} = \frac{2j + 1}{2^{L + 1}}$$
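
The bin centers follow directly from this formula; a tiny sketch for $L = 2$, the same configuration as the plot below:

# Centers of the 2^L equal segments of [0, 1): sigma_j = (2j + 1) / 2^(L + 1).
L_demo = 2
sigma_demo = (2 * np.arange(2**L_demo) + 1) / 2.0**(L_demo + 1)
print sigma_demo  # -> 0.125, 0.375, 0.625, 0.875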

In [15]:
from make_partition_plots import make_1D_centers
make_1D_centers(L=2, s_values=S_VALUES)


$$K(t, s) = e^{- t s \sqrt{-1}}, \, ts = t \sigma + t (s - \sigma) \Longrightarrow K(t, s) = K(t, \sigma) K(t, s - \sigma)$$
$$\widehat{f}(t) = \sum_{\sigma} K(t, \sigma) \left[\sum_{s \in B(\sigma)} K(t, s - \sigma) D(s)\right]$$
$$\widehat{f}(t) = \sum_{\sigma} K(t, \sigma) \underbrace{\left[\sum_{s \in B(\sigma)} K(t, s - \sigma) D(s)\right]}_{D(\sigma)}$$
$$\widehat{f}(t) = \sum_{\sigma} K(t, \sigma) D(\sigma)$$
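
Since $ts = t \sigma + t(s - \sigma)$, the factorization $K(t, s) = K(t, \sigma) K(t, s - \sigma)$ behind this regrouping holds exactly; a quick spot-check using the dft_kernel defined earlier (with arbitrary random inputs):

# Check of the exact kernel factorization for random t, s and sigma.
t_chk, s_chk, sigma_chk = np.random.random(3)
lhs = dft_kernel(t_chk, s_chk)
rhs = dft_kernel(t_chk, sigma_chk) * dft_kernel(t_chk, s_chk - sigma_chk)
print abs(lhs - rhs)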

Operation Count

$$2^L \text{ boxes}, \quad 2^L = \mathcal{O}\left(N\right), \quad \left|B(\sigma)\right| = \mathcal{O}(1)$$

$$\begin{align*} \sum_{s \in B(\sigma)} K(t, s - \sigma) D(s) &\rightarrow \mathcal{O}(1) \\ \widehat{f}_k &\rightarrow \mathcal{O}(N) \\ \widehat{f} &\rightarrow \mathcal{O}\left(N^2\right). \end{align*}$$

But We Want Better Than Quadratic

$$K(t, s) \approx \sum_{\alpha = 0}^{M - 1} T_{\alpha}(t) S_{\alpha}(s)$$
$$\text{exp}\left(- t s \sqrt{-1}\right) \approx \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} t^{\alpha} s^{\alpha}$$
$$\widehat{f}(t) \approx \sum_{\sigma} K(t, \sigma) \sum_{s \in B(\sigma)} \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} t^{\alpha} \left(s - \sigma\right)^{\alpha} D(s)$$
$$K'(t, \sigma, \alpha) = K(t, \sigma) t^{\alpha}, \quad D(\sigma, \alpha) = \sum_{s \in B(\sigma)} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} \left(s - \sigma\right)^{\alpha} D(s)$$
$$\widehat{f}(t) = \sum_{\sigma, \alpha} K'(t, \sigma, \alpha) D(\sigma, \alpha)$$
$$K(t, s - \sigma) \approx \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} t^{\alpha} \left(s - \sigma\right)^{\alpha} \text{ when } \left|t\right| \gg 1$$
$$K(t - \tau, s - \sigma) \approx \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} (t - \tau)^{\alpha} \left(s - \sigma\right)^{\alpha} \text{ for } t \in B(\tau)$$
$$\left|(t - \tau)(s - \sigma)\right| \leq r_{\tau} r_{\sigma} \text{ uniformly bounded}$$
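
To see why the uniform bound matters, here is a small illustrative sketch: the truncated Taylor series of $\text{exp}\left(-x \sqrt{-1}\right)$ converges rapidly as long as $|x| = \left|(t - \tau)(s - \sigma)\right| \leq r_{\tau} r_{\sigma}$ stays modest, so a small fixed $M$ suffices at every level (the value of x below is an arbitrary stand-in).

from math import factorial

# x stands in for (t - tau) * (s - sigma); 0.3 is an arbitrary value meant
# to be comfortably inside the bound r_tau * r_sigma.
x = 0.3
for M_trunc in (2, 4, 6, 8):
    partial = sum((-1.0j)**alpha / factorial(alpha) * x**alpha
                  for alpha in xrange(M_trunc))
    print M_trunc, abs(np.exp(-1.0j * x) - partial)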

Partitioning the Inputs

  • $\mathbb{R}$: binary tree
  • $\mathbb{R}^2$ / $\mathbb{C}$: quadtree
  • $\mathbb{R}^k$: k-d tree

In [16]:
make_butterfly(L=4, level=0)



In [17]:
make_butterfly(L=4, level=1)



In [18]:
make_butterfly(L=4, level=2)



In [19]:
make_butterfly(L=4, level=3)



In [20]:
make_butterfly(L=4, level=4)


$$\left|(t - \tau)(s - \sigma)\right| \leq r_{\tau} r_{\sigma} \text{ uniformly bounded}$$
$$K(t - \tau, s - \sigma)$$
$$\widehat{f}(t) \approx \sum_{\sigma} K(t, \sigma) \sum_{s \in B(\sigma)} \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} t^{\alpha} \left(s - \sigma\right)^{\alpha} D(s) \text{ when } \left|t\right| \gg 1$$
$$\widehat{f}(t) \approx \sum_{\sigma} \cdots \sum_{s \in B(\sigma)} \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} (t - \tau)^{\alpha} \left(s - \sigma\right)^{\alpha} D(s)$$

In [21]:
import sympy


t, s, tau, sigma = sympy.symbols('t s tau sigma')
RHS = tau * sigma + tau * (s - sigma) + sigma * (t - tau) + (s - sigma) * (t - tau)
display.display(display.Math(sympy.latex(RHS)))


$$\sigma \tau + \sigma \left(t - \tau\right) + \tau \left(s - \sigma\right) + \left(s - \sigma\right) \left(t - \tau\right)$$

In [22]:
display.display(display.Math(sympy.latex(RHS.simplify())))


$$s t$$
$$K(t, s) = K(\tau, \sigma) K(t - \tau, \sigma) K(\tau, s - \sigma) K(t - \tau, s - \sigma)$$
$$K(t, s) \approx K(\tau, \sigma) K(t - \tau, \sigma) K(\tau, s - \sigma) \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} (t - \tau)^{\alpha} \left(s - \sigma\right)^{\alpha}$$
$$K'(t, \sigma, \alpha) = K(t - \tau, \sigma) (t - \tau)^{\alpha}$$
$$D(\tau, \sigma, \alpha) = \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} K(\tau, \sigma) \sum_{s \in B(\sigma)} K(\tau, s - \sigma) \left(s - \sigma\right)^{\alpha} D(s)$$
$$\widehat{f}(t) = \sum_{\sigma, \alpha} K'(t, \sigma, \alpha) D(\tau, \sigma, \alpha)$$

Are we there yet?

Computing $D\left(\tau, \sigma, \alpha\right)$

  • $M$ choices of $\alpha$
  • $2^{\ell}$ choices of $\tau$
  • $2^{L - \ell}$ choices of $\sigma$

In total: $M \cdot 2^L = \mathcal{O}\left(N\right)$

We start with $\ell = 0$ and consider the cost of computing these coefficients.


In [23]:
make_butterfly(L=4, level=0)


Counting Operations: $\ell = 0$

• $\left|\left\{B(\tau)\right\}_{\tau}\right| = 1$

• $\left|\left\{B(\sigma)\right\}_{\sigma}\right| = 2^L = \mathcal{O}(N)$

• $\left|\left\{D(\tau, \sigma, \alpha)\right\}_{\tau, \sigma, \alpha}\right| = \mathcal{O}(N)$

• $\widehat{f}(t) = \displaystyle \sum_{\sigma, \alpha} K'(t, \sigma, \alpha) D(\tau, \sigma, \alpha)$ takes $\mathcal{O}(N)$ per target

Total: $\mathcal{O}\left(N^2\right)$, since this $\mathcal{O}(N)$ sum must be evaluated at all $N$ targets $t_k$; here the coefficients $D(\tau, \sigma, \alpha) = \displaystyle \sum_{s \in B(\sigma)} \cdots$ are cheap

Instead, consider the other extreme, where $\ell = L$.


In [24]:
make_butterfly(L=4, level=4)


Counting Operations: $\ell = L$

• $\left|\left\{B(\tau)\right\}_{\tau}\right| = 2^L = \mathcal{O}(N)$

• $\left|\left\{B(\sigma)\right\}_{\sigma}\right| = 1$

• $\widehat{f}(t) = \displaystyle \sum_{\sigma, \alpha} K'(t, \sigma, \alpha) D(\tau, \sigma, \alpha)$ takes $\mathcal{O}(1)$ per target

• Now $D(\tau, \sigma, \alpha) = \displaystyle \sum_{s \in B(\sigma)} \cdots$ is $\mathcal{O}(N)$

Total: still $\mathcal{O}\left(N^2\right)$

Takeaway

$$\ell = 0 \Longrightarrow \sum_{s \in B(\sigma)} \text{ is cheap } \Longrightarrow D(\tau, \sigma, \alpha) \text{ is cheap}$$
$$\ell = L \Longrightarrow \sum_{\sigma, \alpha} \text{ is cheap } \Longrightarrow \widehat{f}(t) \text{ is cheap}$$

Now What

$$\left\{D(\tau, \sigma, \alpha)\right\}_{\ell = 0} \rightarrow \cdots \rightarrow \left\{D(\tau, \sigma, \alpha)\right\}_{\ell = L}$$
$$D(\tau, \sigma, \alpha) = \frac{1}{\alpha!} \sum_{s \in B(\sigma)} K(\tau, s) \left(-\sqrt{-1}\right)^{\alpha} (s - \sigma)^{\alpha} D(s)$$
$$W = \frac{\max(s) - \min(s)}{2^{L - \ell}}$$
$$\left[\begin{matrix} s_0 \\ s_1 \\ \vdots \\ s_{N - 1} \end{matrix}\right] \mapsto \left[\begin{matrix} \left\lfloor\frac{s_0 - \min(s)}{W}\right\rfloor \\ \left\lfloor\frac{s_1 - \min(s)}{W}\right\rfloor \\ \vdots \\ \left\lfloor\frac{s_{N - 1} - \min(s)}{W}\right\rfloor \end{matrix}\right] = \left[\begin{matrix} b_0 \\ b_1 \\ \vdots \\ b_{N - 1} \end{matrix}\right] \mapsto \left[\begin{matrix} \sigma_0 + b_0 W \\ \sigma_0 + b_1 W \\ \vdots \\ \sigma_0 + b_{N - 1} W \end{matrix}\right]$$

In [25]:
def get_bins_and_deltas(vals, min_val, bin_width, num_bins):
    bin_indices = np.floor((vals - min_val) / bin_width)
    # max(vals) falls in the last bin
    bin_indices = np.minimum(bin_indices, num_bins - 1)

    bin_centers = min_val + (0.5 + bin_indices) * bin_width
    return bin_indices.astype(int), vals - bin_centers
$$D(\tau, \sigma, \alpha) = \frac{1}{\alpha!} \sum_{s \in B(\sigma)} K(\tau, s) \left(-\sqrt{-1}\right)^{\alpha} (s - \sigma)^{\alpha} D(s)$$
$$\left[\begin{matrix} s_0 \\ s_1 \\ \vdots \\ s_{N - 1} \end{matrix}\right] \mapsto \left[\begin{matrix} s_0 - \sigma(s_0) \\ s_1 - \sigma(s_1) \\ \vdots \\ s_{N - 1} - \sigma\left(s_{N - 1}\right) \end{matrix}\right] = \left[\begin{matrix} d_0 \\ d_1 \\ \vdots \\ d_{N - 1} \end{matrix}\right] \mapsto \left[\begin{matrix} d_0^0 & d_0^1 & \cdots & d_0^{M - 1} \\ d_1^0 & d_1^1 & \cdots & d_1^{M - 1} \\ \vdots & \vdots & \ddots & \vdots \\ d_{N - 1}^0 & d_{N - 1}^1 & \cdots & d_{N - 1}^{M - 1} \end{matrix}\right]$$

In [26]:
def create_initial_data(s_values, min_s, max_s, tau,
                        actual_data, num_bins, M):
    bin_width = (max_s - min_s) / float(num_bins)
    bin_indices, s_deltas = get_bins_and_deltas(s_values, min_s,
                                                bin_width, num_bins)

    sum_parts = np.zeros((len(s_values), M), dtype=np.complex128)
    sum_parts[:, 0] = dft_kernel(tau, s_values) * actual_data

    for alpha in xrange(1, M):
        sum_parts[:, alpha] = (sum_parts[:, alpha - 1] * s_deltas *
                               (-1.0j) / alpha)

    result = []
    curr_sigma = min_s + 0.5 * bin_width
    for bin_index in xrange(num_bins):
        sum_across_bin = np.sum(
            sum_parts[np.where(bin_indices == bin_index)[0], :], axis=0)
        result.append((tau, curr_sigma, sum_across_bin.reshape(M, 1)))
        curr_sigma += bin_width

    return result
$$\cdots \rightarrow \left\{D(\tau, \sigma, \alpha)\right\}_{\ell = L}$$
$$\widehat{f}(t) = \sum_{\sigma, \alpha} K(t - \tau, \sigma) \left(t - \tau\right)^{\alpha} D(\tau, \sigma, \alpha)$$
$$d_k = t_k - \tau(t_k)$$
$$\left[\begin{matrix} d_0^0 & \cdots & d_0^{M - 1} \\ \vdots & \ddots & \vdots \\ d_{N - 1}^0 & \cdots & d_{N - 1}^{M - 1} \end{matrix}\right] \otimes \left[\begin{matrix} D\left(\tau(t_0), \sigma, 0\right) & \cdots & D\left(\tau(t_0), \sigma, M - 1\right) \\ \vdots & \ddots & \vdots \\ D\left(\tau(t_{N - 1}), \sigma, 0\right) & \cdots & D\left(\tau(t_{N - 1}), \sigma, M - 1\right) \end{matrix}\right]$$

In [27]:
def compute_t_by_bins(t_values, min_t, max_t, coeff_vals, num_bins, M):
    bin_width = (max_t - min_t) / float(num_bins)
    bin_indices, t_deltas = get_bins_and_deltas(t_values, min_t,
                                                bin_width, num_bins)

    # We assume all sigma values are the same and do not check.
    sigma = coeff_vals[0][1]

    exponents = np.arange(M, dtype=np.float64)
    t_delta_powers = t_deltas[:, np.newaxis]**exponents[np.newaxis, :]

    # Use hstack since the vectors are 2D, then transpose.
    coefficients = np.hstack([triple[2] for triple in coeff_vals]).T

    t_matched_coeffs = coefficients[bin_indices, :]

    fhat_t = np.sum(t_delta_powers * t_matched_coeffs, axis=1,
                    dtype=np.complex128)
    fhat_t *= dft_kernel(t_deltas, sigma)

    return fhat_t

Converting Coefficients

High Level Approach

$$\mathbf{\text{Refine: }} B(\tau) = B(\tau_+') \cup B(\tau_+'')$$
$$\mathbf{\text{Coarsen: }} B(\sigma_{-}) = B(\sigma) \cup B(\sigma')$$

High Level Approach

$$\mathbf{\text{Have: }} D(\tau, \sigma, \alpha) \text{ and } D(\tau, \sigma', \alpha)$$
$$\mathbf{\text{Want: }} D(\tau_+', \sigma_{-}, \alpha) \text{ and } D(\tau_+'', \sigma_{-}, \alpha)$$

High Level Approach

$$\mathbf{\text{Intermediate: }} D(\tau_+', \sigma, \alpha) \text{ and } D(\tau_+'', \sigma, \alpha)$$

Refine Target

$$D(\tau, \sigma, \alpha) = \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} K(\tau, \sigma) \sum_{s \in B(\sigma)} K(\tau, s - \sigma) \left(s - \sigma\right)^{\alpha} D(s)$$
$$D(\tau_+, \sigma, \alpha) = \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} \sum_{s \in B(\sigma)} K(\tau_+, s) \left(s - \sigma\right)^{\alpha} D(s)$$
$$K(t, s) = K(\tau, \sigma) K(t - \tau, \sigma) K(\tau, s - \sigma) K(t - \tau, s - \sigma)$$
$$K(\tau_+, s) = K(\tau, \sigma) K(\tau_+ - \tau, \sigma) K(\tau, s - \sigma) K(\tau_+ - \tau, s - \sigma)$$
$$D(\tau_+, \sigma, \alpha) = \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} \sum_{s \in B(\sigma)} K(\tau_+, s) \left(s - \sigma\right)^{\alpha} D(s)$$
$$K(\tau_+, s) = K(\tau, \sigma) K(\tau_+ - \tau, \sigma) K(\tau, s - \sigma) K(\tau_+ - \tau, s - \sigma)$$
$$K(\tau_+ - \tau, s - \sigma) = \sum_{\beta \geq 0} \frac{\left(- \sqrt{-1}\right)^{\beta}}{\beta!} \left(\tau_+ - \tau\right)^{\beta} \left(s - \sigma\right)^{\beta}$$

... Wave Magic Algebra Wand ...

$$D(\tau_+, \sigma, \alpha) = K(\tau_+ - \tau, \sigma) \sum_{\beta \geq 0} \binom{\alpha + \beta}{\alpha} \left(\tau_+ - \tau\right)^{\beta} D(\tau, \sigma, \alpha + \beta)$$
$$D(\tau, \sigma, \alpha + \beta) \text{ not defined for all } \beta$$
$$D(\tau_+, \sigma, \alpha) \approx K(\tau_+ - \tau, \sigma) \sum_{\beta : \alpha + \beta < M} \binom{\alpha + \beta}{\alpha} \left(\tau_+ - \tau\right)^{\beta} D(\tau, \sigma, \alpha + \beta)$$
$$D(\tau_+, \sigma, \alpha) \approx K(\tau_+ - \tau, \sigma) \sum_{\beta : \alpha + \beta < M} \binom{\alpha + \beta}{\alpha} \left(\tau_+ - \tau\right)^{\beta} D(\tau, \sigma, \alpha + \beta)$$
$$0: \, \binom{0}{0} \Delta^{0} D(\tau, \sigma, 0) + \binom{1}{0} \Delta^{1} D(\tau, \sigma, 1) + \cdots + \binom{M - 1}{0} \Delta^{M - 1} D(\tau, \sigma, M - 1)$$
$$1: \, \binom{1}{1} \Delta^{0} D(\tau, \sigma, 1) + \binom{2}{1} \Delta^{1} D(\tau, \sigma, 2) + \cdots + \binom{M - 1}{1} \Delta^{M - 2} D(\tau, \sigma, M - 1)$$
$$D(\tau_{+}, \sigma, \, \cdot \,) = K(\Delta, \sigma) A_1(\Delta) D(\tau, \sigma, \, \cdot \,)$$
$$A_1(\Delta) = \left[\begin{matrix} \binom{0}{0} \Delta^{0} & \binom{1}{0} \Delta^{1} & \binom{2}{0} \Delta^{2} & \cdots & \binom{M - 1}{0} \Delta^{M - 1} \\ & \binom{1}{1} \Delta^{0} & \binom{2}{1} \Delta^{1} & \cdots & \binom{M - 1}{1} \Delta^{M - 2} \\ & & \binom{2}{2} \Delta^{0} & \cdots & \binom{M - 1}{2} \Delta^{M - 3} \\ & & & \ddots & \vdots \\ & & & & \binom{M - 1}{M - 1} \Delta^{0} \end{matrix}\right]$$

In [28]:
def A1(M, delta, eye_func=np.eye):
    result = eye_func(M)
    result[0, 1] = delta

    for col in xrange(2, M):
        prev_val = result[0, col - 1]

        # Pascal's triangle recurrence does not apply at the ends of the
        # column; the diagonal (zeroth power) entry is already set to 1.
        result[0, col] = delta * prev_val
        for row in xrange(1, col):
            curr_val = result[row, col - 1]
            result[row, col] = prev_val + delta * curr_val
            prev_val = curr_val

    return result

In [29]:
A1_val = A1(5, -1.0)
print A1_val


[[ 1. -1.  1. -1.  1.]
 [ 0.  1. -2.  3. -4.]
 [ 0.  0.  1. -3.  6.]
 [ 0.  0.  0.  1. -4.]
 [ 0.  0.  0.  0.  1.]]

In [30]:
A1_symb = A1(5, sympy.Symbol('Delta'), eye_func=sympy.eye)
display.display(display.Math(sympy.latex(A1_symb)))


$$\left[\begin{matrix}1 & \Delta & \Delta^{2} & \Delta^{3} & \Delta^{4}\\0 & 1 & 2 \Delta & 3 \Delta^{2} & 4 \Delta^{3}\\0 & 0 & 1 & 3 \Delta & 6 \Delta^{2}\\0 & 0 & 0 & 1 & 4 \Delta\\0 & 0 & 0 & 0 & 1\end{matrix}\right]$$

In [31]:
from make_partition_plots import show_tau_refine

show_tau_refine()


$$\Delta_T = \tau_{+}'' - \tau = \tau - \tau_{+}' = \frac{1}{4} \frac{\max(t) - \min(t)}{2^{\ell}} = \frac{1}{4} \frac{W_T}{2^{\ell}}$$

Remember This

$$\left[\begin{matrix} D(\tau_{+}', \sigma, \, \cdot \,) \\ D(\tau_{+}', \sigma', \, \cdot \,) \\ D(\tau_{+}'', \sigma, \, \cdot \,) \\ D(\tau_{+}'', \sigma', \, \cdot \,) \end{matrix}\right] = \left[\begin{matrix} K(-\Delta_T, \sigma) A_1(-\Delta_T) & 0 \\ 0 & K(-\Delta_T, \sigma') A_1(-\Delta_T) \\ K(\Delta_T, \sigma) A_1(\Delta_T) & 0 \\ 0 & K(\Delta_T, \sigma') A_1(\Delta_T) \end{matrix}\right] \left[\begin{matrix} D(\tau, \sigma, \, \cdot \,) \\ D(\tau, \sigma', \, \cdot \,) \end{matrix}\right]$$
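
The refinement relation is easy to sanity-check numerically. The sketch below uses made-up box data (s_box, data_box, sigma_chk, tau_chk, and delta_T_chk are all arbitrary example values): it computes $D(\tau, \sigma, \, \cdot \,)$ for a single source box straight from the definition and confirms that $K(\Delta_T, \sigma) A_1(\Delta_T)$ maps it, up to truncation at $M$, to $D(\tau + \Delta_T, \sigma, \, \cdot \,)$.

from math import factorial

np.random.seed(1234)
M_chk = 8
s_box = np.random.uniform(0.4, 0.6, size=32)   # sources in one box B(sigma)
data_box = np.random.randn(32)                 # the weights D(s)
sigma_chk = 0.5                                # center of B(sigma)
tau_chk, delta_T_chk = 3.0, 0.25               # parent center tau and shift Delta_T


def direct_D(tau_val):
    # D(tau, sigma, alpha) straight from the definition, for alpha < M_chk.
    return np.array([
        (-1.0j)**alpha / factorial(alpha) *
        np.sum(dft_kernel(tau_val, s_box) *
               (s_box - sigma_chk)**alpha * data_box)
        for alpha in xrange(M_chk)])


refined = (dft_kernel(delta_T_chk, sigma_chk) *
           A1(M_chk, delta_T_chk).dot(direct_D(tau_chk)))
print np.max(np.abs(refined - direct_D(tau_chk + delta_T_chk)))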

In [32]:
def mult_diag(val, A, M, diag):
    first_row = max(0, -diag)
    last_row = min(M, M - diag)
    for row in xrange(first_row, last_row):
        A[row, row + diag] *= val


def A_update(A_val, scale_multiplier=0.5, upper_diags=True):
    M = A_val.shape[0]

    # If not `upper_diags` we want the lower diagonal.
    diag_mult = 1 if upper_diags else -1
    # We don't need to update the main diagonal since exponent=0.
    scale_factor = 1
    for diagonal in xrange(1, M):
        scale_factor *= scale_multiplier
        mult_diag(scale_factor, A_val, M, diagonal * diag_mult)

In [33]:
A1_negative = A1_val.copy()
A_update(A1_negative, scale_multiplier=-1.0)
print A1_negative


[[ 1.  1.  1.  1.  1.]
 [ 0.  1.  2.  3.  4.]
 [ 0.  0.  1.  3.  6.]
 [ 0.  0.  0.  1.  4.]
 [ 0.  0.  0.  0.  1.]]

In [34]:
all_ones = sympy.ones(4, 4)
A_update(all_ones, scale_multiplier=sympy.Symbol('x'), upper_diags=True)
A_update(all_ones, scale_multiplier=sympy.Symbol('y'), upper_diags=False)
display.display(display.Math(sympy.latex(all_ones)))


$$\left[\begin{matrix}1 & x & x^{2} & x^{3}\\y & 1 & x & x^{2}\\y^{2} & y & 1 & x\\y^{3} & y^{2} & y & 1\end{matrix}\right]$$

Coarsen Source

$$D(\tau_+, \sigma_{-}, \alpha) = \frac{\left(-\sqrt{-1}\right)^{\alpha}}{\alpha!} \sum_{s \in B(\sigma_{-})} K(\tau_+, s) \left(s - \sigma_{-}\right)^{\alpha} D(s)$$
$$B(\sigma_{-}) = B(\sigma) \cup B(\sigma')$$
$$\sum_{s \in B(\sigma_{-})} = \sum_{s \in B(\sigma)} + \sum_{s \in B(\sigma')}$$
$$s - \sigma_{-} = (s - \sigma) + (\sigma - \sigma_{-})$$
$$(s - \sigma_{-})^{\alpha} = \sum_{\beta = 0}^{\alpha} \binom{\alpha}{\beta} (s - \sigma)^{\beta} (\sigma - \sigma_{-})^{\alpha - \beta}.$$
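
The binomial re-expansion above is the only algebra hidden behind the next wand-wave; a quick sympy confirmation for a fixed small $\alpha$ (the value 3 is arbitrary):

# Check of (s - sigma_minus)^alpha expanded about sigma, for alpha = 3.
s_sym, sigma_sym, sigma_minus_sym = sympy.symbols('s sigma sigma_minus')
alpha_chk = 3
lhs = (s_sym - sigma_minus_sym)**alpha_chk
rhs = sum(sympy.binomial(alpha_chk, beta) *
          (s_sym - sigma_sym)**beta *
          (sigma_sym - sigma_minus_sym)**(alpha_chk - beta)
          for beta in xrange(alpha_chk + 1))
print sympy.simplify(lhs - rhs)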

... Wave Magic Algebra Wand ...

$$D(\tau_+, \sigma_{-}, \alpha) = \sum_{\beta = 0}^{\alpha} \frac{\left(-\sqrt{-1}\right)^{\alpha - \beta}}{(\alpha - \beta)!} \left[(\sigma - \sigma_{-})^{\alpha - \beta} D(\tau_+, \sigma, \beta) + (\sigma' - \sigma_{-})^{\alpha - \beta} D(\tau_+, \sigma', \beta)\right]$$

In [35]:
from make_partition_plots import show_sigma_coarsen

show_sigma_coarsen()


$$\Delta_S = \sigma' - \sigma_{-} = \sigma_{-} - \sigma = \frac{1}{2} \frac{\max(s) - \min(s)}{2^{L - \ell}} = \frac{1}{2} \frac{W_S}{2^{L - \ell}}$$
$$D(\tau_+, \sigma_{-}, \alpha) = \sum_{\beta = 0}^{\alpha} \frac{\left(-\sqrt{-1}\right)^{\alpha - \beta}}{(\alpha - \beta)!} (\sigma - \sigma_{-})^{\alpha - \beta} D(\tau_+, \sigma, \beta) + \cdots $$
$$D(\tau_+, \sigma_{-}, 0) = \left[\frac{\left(\Delta_S \sqrt{-1}\right)^{0}}{(0)!} D(\tau_+, \sigma, 0)\right] + \cdots$$
$$D(\tau_+, \sigma_{-}, 1) = \left[\frac{\left(\Delta_S \sqrt{-1}\right)^{1}}{(1)!} D(\tau_+, \sigma, 0) + \frac{\left(\Delta_S \sqrt{-1}\right)^{0}}{(0)!} D(\tau_+, \sigma, 1)\right] + \cdots $$
$$D(\tau_+, \sigma_{-}, \alpha) = \sum_{\beta = 0}^{\alpha} \frac{\left(-\sqrt{-1}\right)^{\alpha - \beta}}{(\alpha - \beta)!} (\sigma - \sigma_{-})^{\alpha - \beta} D(\tau_+, \sigma, \beta) + \cdots $$
$$A_2(\Delta) = \left[\begin{matrix} \frac{\left(-\Delta \sqrt{-1}\right)^{0}}{0!} & & & \\ \frac{\left(-\Delta \sqrt{-1}\right)^{1}}{1!} & \frac{\left(-\Delta \sqrt{-1}\right)^{0}}{0!} & & \\ \vdots & \vdots & \ddots & \\ \frac{\left(-\Delta \sqrt{-1}\right)^{M - 1}}{(M - 1)!} & \frac{\left(-\Delta \sqrt{-1}\right)^{M - 2}}{(M - 2)!} & \cdots & \frac{\left(-\Delta \sqrt{-1}\right)^{0}}{0!} \\ \end{matrix}\right]$$
$$D(\tau_{+}, \sigma_{-}, \, \cdot \,) = A_2(-\Delta_S) D(\tau_{+}, \sigma, \, \cdot \,) + A_2(\Delta_S) D(\tau_{+}, \sigma', \, \cdot \,)$$

In [36]:
def complex_eye(M):
    return np.eye(M, dtype=np.complex128)


def set_diag(val, A, M, diag):
    first_row = max(0, -diag)
    last_row = min(M, M - diag)
    for row in xrange(first_row, last_row):
        A[row, row + diag] = val


def A2(M, delta, eye_func=complex_eye, imag=1.0j):
    result = eye_func(M)
    new_delta = -imag * delta

    diagonal_value = 1
    for sub_diagonal in xrange(1, M):
        diagonal_value = diagonal_value * new_delta / sub_diagonal
        set_diag(diagonal_value, result, M, -sub_diagonal)

    return result

In [37]:
A2_val = A2(5, 1.0j)
print 'Frobenius norm of imaginary part:', np.linalg.norm(np.imag(A2_val), ord='fro')
print '-' * 40
A2_val = np.real(A2_val)
print A2_val


Frobenius norm of imaginary part: 0.0
----------------------------------------
[[ 1.          0.          0.          0.          0.        ]
 [ 1.          1.          0.          0.          0.        ]
 [ 0.5         1.          1.          0.          0.        ]
 [ 0.16666667  0.5         1.          1.          0.        ]
 [ 0.04166667  0.16666667  0.5         1.          1.        ]]

In [38]:
A2_symb = A2(5, sympy.Symbol('Delta'),
             eye_func=sympy.eye, imag=sympy.I)

display.display(display.Math(sympy.latex(A2_symb)))


$$\left[\begin{matrix}1 & 0 & 0 & 0 & 0\\- i \Delta & 1 & 0 & 0 & 0\\- \frac{\Delta^{2}}{2} & - i \Delta & 1 & 0 & 0\\\frac{i \Delta^{3}}{6} & - \frac{\Delta^{2}}{2} & - i \Delta & 1 & 0\\\frac{\Delta^{4}}{24} & \frac{i \Delta^{3}}{6} & - \frac{\Delta^{2}}{2} & - i \Delta & 1\end{matrix}\right]$$

In [39]:
A_update(A2_val, scale_multiplier=2.0, upper_diags=False)
print A2_val


[[ 1.          0.          0.          0.          0.        ]
 [ 2.          1.          0.          0.          0.        ]
 [ 2.          2.          1.          0.          0.        ]
 [ 1.33333333  2.          2.          1.          0.        ]
 [ 0.66666667  1.33333333  2.          2.          1.        ]]

In [40]:
A_update(A2_val, scale_multiplier=-1.0, upper_diags=False)
print A2_val


[[ 1.          0.          0.          0.          0.        ]
 [-2.          1.          0.          0.          0.        ]
 [ 2.         -2.          1.          0.          0.        ]
 [-1.33333333  2.         -2.          1.          0.        ]
 [ 0.66666667 -1.33333333  2.         -2.          1.        ]]

Remember This

$$\left[\begin{matrix} D(\tau_{+}', \sigma_{-}, \, \cdot \,) \\ D(\tau_{+}'', \sigma_{-}, \, \cdot \,) \end{matrix}\right] = \left[\begin{matrix} A_2(-\Delta_S) & A_2(\Delta_S) & 0 & 0\\0 & 0 & A_2(-\Delta_S) & A_2(\Delta_S)\end{matrix}\right] \left[\begin{matrix} D(\tau_{+}', \sigma, \, \cdot \,) \\ D(\tau_{+}', \sigma', \, \cdot \,) \\ D(\tau_{+}'', \sigma, \, \cdot \,) \\ D(\tau_{+}'', \sigma', \, \cdot \,) \end{matrix}\right]$$
$$\left[\begin{matrix} D(\tau, \sigma, \, \cdot \,) \\ D(\tau, \sigma', \, \cdot \,) \end{matrix}\right] \rightarrow \left[\begin{matrix} D(\tau_{+}', \sigma, \, \cdot \,) \\ D(\tau_{+}', \sigma', \, \cdot \,) \\ D(\tau_{+}'', \sigma, \, \cdot \,) \\ D(\tau_{+}'', \sigma', \, \cdot \,) \end{matrix}\right] \rightarrow \left[\begin{matrix} D(\tau_+', \sigma_{-}, \, \cdot \,) \\ D(\tau_+'', \sigma_{-}, \, \cdot \,) \end{matrix}\right]$$
$$\left[\begin{matrix} K(-\Delta_T, \sigma) A_1(-\Delta_T) & 0 \\ 0 & K(-\Delta_T, \sigma') A_1(-\Delta_T) \\ K(\Delta_T, \sigma) A_1(\Delta_T) & 0 \\ 0 & K(\Delta_T, \sigma') A_1(\Delta_T) \end{matrix}\right]$$
$$\left[\begin{matrix} A_2(-\Delta_S) & A_2(\Delta_S) & 0 & 0\\0 & 0 & A_2(-\Delta_S) & A_2(\Delta_S)\end{matrix}\right]$$

In [41]:
a, b, c, d, e, f = sympy.symbols('a b c d e f')
M_left = sympy.Matrix([[a, b, 0, 0], [0, 0, a, b]])
M_right = sympy.Matrix([[c, 0], [0, d], [e, 0], [0, f]])

message = sympy.latex(M_left) + sympy.latex(M_right)
display.display(display.Math(message))


$$\left[\begin{matrix}a & b & 0 & 0\\0 & 0 & a & b\end{matrix}\right]\left[\begin{matrix}c & 0\\0 & d\\e & 0\\0 & f\end{matrix}\right]$$

In [42]:
M_prod = M_left * M_right
message = sympy.latex(M_prod)
display.display(display.Math(message))


$$\left[\begin{matrix}a c & b d\\a e & b f\end{matrix}\right]$$
$$\left[\begin{matrix} K(-\Delta_T, \sigma) A_2(-\Delta_S) A_1(-\Delta_T) & K(-\Delta_T, \sigma') A_2(\Delta_S) A_1(-\Delta_T) \\ K(\Delta_T, \sigma) A_2(-\Delta_S) A_1(\Delta_T) & K(\Delta_T, \sigma') A_2(\Delta_S) A_1(\Delta_T) \end{matrix}\right]$$

In [43]:
from itertools import izip

def increase_tau_refinement(values, num_tau, num_sigma, update_func):
    result = []
    for tau_index in xrange(num_tau):
        begin = tau_index * num_sigma
        end = begin + num_sigma
        # We need to hold the right values until all the left values
        # have been added.
        right_updated = []

        # Assumes num_sigma is even.
        left_vals, right_vals = values[begin:end:2], values[1 + begin:end:2]
        for left_val, right_val in izip(left_vals, right_vals):
            new_left, new_right = update_func(left_val, right_val)
            result.append(new_left)
            right_updated.append(new_right)

        result.extend(right_updated)

    return result

In [44]:
def custom_update(left_val, right_val):
    tau, sigma1 = left_val
    tau2, sigma2 = right_val
    new_sigma = sigma1 >> 1
    if tau != tau2 or new_sigma != (sigma2 >> 1):
        raise ValueError(left_val, right_val)
    new_left = (2 * tau, new_sigma)
    new_right = (2 * tau + 1, new_sigma)
    return new_left, new_right

In [45]:
num_tau = 2
num_sigma = 4
index_pairs1 = [(tau, sigma)
                for tau in xrange(num_tau)
                for sigma in xrange(num_sigma)]

index_pairs2 = increase_tau_refinement(
    index_pairs1, num_tau, num_sigma, custom_update)
index_pairs3 = increase_tau_refinement(
    index_pairs2, 2 * num_tau, num_sigma / 2, custom_update)

In [46]:
row_template = (r'\left(\tau_{%d}, \sigma_{%d}\right) & \rightarrow & '
                r'\left(\tau_{%d}, \sigma_{%d}\right) & \rightarrow & '
                r'\left(\tau_{%d}, \sigma_{%d}\right) \\')

latex_rows = [
    r'\begin{array}{ccccc}',
    r'\ell = 1 & & \ell = 2 & & \ell = 3 \\',
    r'\hline',
]

for (i1, j1), (i2, j2), (i3, j3) in zip(index_pairs1,
                                        index_pairs2, index_pairs3):
    latex_rows.append(row_template % (i1, j1, i2, j2, i3, j3))

latex_rows.append(r'\end{array}')

In [47]:
display.display(display.Math('\n'.join(latex_rows)))


$$\begin{array}{ccccc} \ell = 1 & & \ell = 2 & & \ell = 3 \\ \hline \left(\tau_{0}, \sigma_{0}\right) & \rightarrow & \left(\tau_{0}, \sigma_{0}\right) & \rightarrow & \left(\tau_{0}, \sigma_{0}\right) \\ \left(\tau_{0}, \sigma_{1}\right) & \rightarrow & \left(\tau_{0}, \sigma_{1}\right) & \rightarrow & \left(\tau_{1}, \sigma_{0}\right) \\ \left(\tau_{0}, \sigma_{2}\right) & \rightarrow & \left(\tau_{1}, \sigma_{0}\right) & \rightarrow & \left(\tau_{2}, \sigma_{0}\right) \\ \left(\tau_{0}, \sigma_{3}\right) & \rightarrow & \left(\tau_{1}, \sigma_{1}\right) & \rightarrow & \left(\tau_{3}, \sigma_{0}\right) \\ \left(\tau_{1}, \sigma_{0}\right) & \rightarrow & \left(\tau_{2}, \sigma_{0}\right) & \rightarrow & \left(\tau_{4}, \sigma_{0}\right) \\ \left(\tau_{1}, \sigma_{1}\right) & \rightarrow & \left(\tau_{2}, \sigma_{1}\right) & \rightarrow & \left(\tau_{5}, \sigma_{0}\right) \\ \left(\tau_{1}, \sigma_{2}\right) & \rightarrow & \left(\tau_{3}, \sigma_{0}\right) & \rightarrow & \left(\tau_{6}, \sigma_{0}\right) \\ \left(\tau_{1}, \sigma_{3}\right) & \rightarrow & \left(\tau_{3}, \sigma_{1}\right) & \rightarrow & \left(\tau_{7}, \sigma_{0}\right) \\ \end{array}$$

In [48]:
def make_update_func(A1_minus, A1_plus, A2_minus, A2_plus, delta_T):

    top_left, top_right = A2_minus.dot(A1_minus), A2_plus.dot(A1_minus)
    bottom_left, bottom_right = A2_minus.dot(A1_plus), A2_plus.dot(A1_plus)

    def update_func(left_val, right_val):
        tau, sigma, alpha_vals_left = left_val
        # We expect the pair to share tau, and don't check to avoid slowdown.
        tau_same, sigma_prime, alpha_vals_right = right_val

        sigma_minus = 0.5 * (sigma + sigma_prime)
        tau_left = tau - delta_T
        tau_right = tau + delta_T

        new_alpha_vals_left = (
            dft_kernel(-delta_T, sigma) * top_left.dot(alpha_vals_left) +
            dft_kernel(-delta_T, sigma_prime) * top_right.dot(alpha_vals_right)
        )
        new_alpha_vals_right = (
            dft_kernel(delta_T, sigma) * bottom_left.dot(alpha_vals_left) +
            (dft_kernel(delta_T, sigma_prime) *
             bottom_right.dot(alpha_vals_right))
        )

        new_left_val = (tau_left, sigma_minus, new_alpha_vals_left)
        new_right_val = (tau_right, sigma_minus, new_alpha_vals_right)
        return new_left_val, new_right_val

    return update_func

Drumroll...Putting it all together


In [49]:
def solve(s, t, data, L, M):
    min_t, max_t = np.min(t), np.max(t)
    min_s, max_s = np.min(s), np.max(s)
    num_bins = 2**L

    tau = 0.5 * (min_t + max_t)
    coeff_vals = create_initial_data(s, min_s, max_s,
                                     tau, data, num_bins, M)

    # ell = 0
    delta_S = (max_s - min_s) / (2.0 * num_bins)
    delta_T = (max_t - min_t) * 0.25
    num_tau, num_sigma = 1, num_bins

    A1_minus, A1_plus = A1(M, -delta_T), A1(M, delta_T)
    A2_minus, A2_plus = A2(M, -delta_S), A2(M, delta_S)

    for ell in xrange(1, L + 1):
        update_func = make_update_func(A1_minus, A1_plus, A2_minus,
                                       A2_plus, delta_T)
        coeff_vals = increase_tau_refinement(coeff_vals, num_tau,
                                             num_sigma, update_func)

        num_tau, num_sigma, delta_T = update_loop_vals(
            num_tau, num_sigma, delta_T,
            A1_minus, A1_plus, A2_minus, A2_plus)

    return compute_t_by_bins(t, min_t, max_t, coeff_vals, num_bins, M)

In [50]:
def update_loop_vals(num_tau, num_sigma, delta_T,
                     A1_minus, A1_plus, A2_minus, A2_plus):
    num_tau = num_tau * 2
    num_sigma = num_sigma / 2
    delta_T *= 0.5

    A_update(A1_plus, scale_multiplier=0.5, upper_diags=True)
    A_update(A1_minus, scale_multiplier=0.5, upper_diags=True)
    A_update(A2_plus, scale_multiplier=2.0, upper_diags=False)
    A_update(A2_minus, scale_multiplier=2.0, upper_diags=False)

    return num_tau, num_sigma, delta_T

In [51]:
N = blue_whale_call.size
N_vals = np.arange(N, dtype=np.float64)
t = 2 * np.pi * N_vals
s = N_vals / N

L = 10  # relax by 3 levels
M = 50

start = time.time()
soln = solve(s, t, blue_whale_call, L, M)
duration = time.time() - start

message = r'\text{Butterfly Duration: } %2.10f' % (duration,)
display.display(display.Math(message))


$$\text{Butterfly Duration: } 0.3382070065$$

In [52]:
message = r'\text{Recall, Actual Naive Duration: } %2.10f' % (naive_duration,)
display.display(display.Math(message))


$$\text{Recall, Actual Naive Duration: } 7.0778009892$$

In [53]:
error = np.linalg.norm(soln - dft_whale_call, ord=2)

sig, exponent = str(error).split('e')
expr = r'\|e\|_2 = %s \cdot 10^{%s}' % (sig, exponent)
display.display(display.Math(expr))


$$\|e\|_2 = 1.96158293529 \cdot 10^{-08}$$

In [54]:
# NOTE: This assumes s, t, blue_whale_call, dft_whale_call are globals at definition time

def get_time(L, M, s=s, t=t, data=blue_whale_call,
             true_soln=dft_whale_call):
    start = time.time()
    soln = solve(s, t, data, L, M=M)
    duration = time.time() - start
    error = np.linalg.norm(soln - true_soln, ord=2)
    return error, duration

In [55]:
L10_M_choices = (10, 18, 26, 34, 42, 50, 58)
L10_time_pairs = [get_time(L=10, M=M) for M in L10_M_choices]
L11_M_choices = (5, 10, 15, 20, 25, 30, 35)
L11_time_pairs = [get_time(L=11, M=M) for M in L11_M_choices]
L12_M_choices = (4, 8, 12, 16, 20, 24, 28)
L12_time_pairs = [get_time(L=12, M=M) for M in L12_M_choices]
L13_M_choices = (3, 6, 9, 12, 15, 18, 21)
L13_time_pairs = [get_time(L=13, M=M) for M in L13_M_choices]

In [56]:
from make_partition_plots import make_time_plots


make_time_plots(L10_M_choices, L11_M_choices, L12_M_choices, L13_M_choices,
                L10_time_pairs, L11_time_pairs, L12_time_pairs, L13_time_pairs)


Error Analysis

Two truncations:
$$K(t, s) \approx K(\tau, \sigma) K(t - \tau, \sigma) K(\tau, s - \sigma) \sum_{\alpha = 0}^{M - 1} \frac{\left(- \sqrt{-1}\right)^{\alpha}}{\alpha!} (t - \tau)^{\alpha} \left(s - \sigma\right)^{\alpha}$$
and
$$D(\tau_+, \sigma, \alpha) \approx K(\tau_+ - \tau, \sigma) \sum_{\beta : \alpha + \beta < M} \binom{\alpha + \beta}{\alpha} \left(\tau_+ - \tau\right)^{\beta} D(\tau, \sigma, \alpha + \beta)$$
We need to understand how these errors propagate through our solution.

Error Analysis

The rest, a conversation for another day...

Thanks!


In [57]:
# Custom styling, borrowed from https://github.com/ellisonbg/talk-2013-scipy

import os


MAKE_SLIDES = os.getenv('MAKE_BUTTERFLY_SLIDES') == 'True'

# Make plots centered. H/T to http://stackoverflow.com/a/27168595/1068170
STYLE = """\
<style>
div.output_area, .ui-wrapper {
  margin-left: auto !important;
  margin-right: auto !important;
}
</style>
"""


SLIDES_STYLE = """
<style>

.rendered_html {
    font-family: "proxima-nova", helvetica;
    font-size: 150%;
    line-height: 1.3;
}

.rendered_html h1 {
    margin: 0.25em 0em 0.5em;
    color: #015C9C;
    text-align: center;
    line-height: 1.2; 
    page-break-before: always;
}

.rendered_html h2 {
    margin: 1.1em 0em 0.5em;
    color: #26465D;
    line-height: 1.2;
}

.rendered_html h3 {
    margin: 1.1em 0em 0.5em;
    color: #002845;
    line-height: 1.2;
}

.rendered_html li {
    line-height: 1.5; 
}

.prompt {
    font-size: 120%; 
}

.CodeMirror-lines {
    font-size: 120%; 
}

.output_area {
    font-size: 120%; 
}

#notebook {
    background-image: url('files/images/witewall_3.png');
}

h1.bigtitle {
    margin: 4cm 1cm 4cm 1cm;
    font-size: 300%;
}

h3.point {
    font-size: 200%;
    text-align: center;
    margin: 2em 0em 2em 0em;
    color: #26465D;
}

.logo {
    margin: 20px 0 20px 0;
}

a.anchor-link {
    display: none;
}

h1.title { 
    font-size: 250%;
}

</style>
"""

if MAKE_SLIDES:
    STYLE = STYLE + '\n' + SLIDES_STYLE


display.display(display.HTML(STYLE))



In [58]:
%%file make_partition_plots.py
from matplotlib import pyplot as plt
import numpy as np


N_exp = 4  # 16 boxes
N = 2**N_exp
# import seaborn
# seaborn.husl_palette(n_colors=7)
SEABORN_COLORS = [
    [0.9677975592919913, 0.44127456009157356, 0.5358103155058701],
    [0.7757319041862729, 0.5784925270759935, 0.19475566538551875],
    [0.5105309046900421, 0.6614299289084904, 0.1930849118538962],
    [0.20433460114757862, 0.6863857739476534, 0.5407103379425205],
    [0.21662978923073606, 0.6676586160122123, 0.7318695594345369],
    [0.5049017849530067, 0.5909119231215284, 0.9584657252128558],
    [0.9587050080494409, 0.3662259565791742, 0.9231469575614251],
]


def remove_axis_frame(ax):
    ax.set_frame_on(False)
    ax.yaxis.set_label_position('right')
    ax.set_xticklabels([])
    # H/T: http://stackoverflow.com/a/20416681/1068170
    for tic in ax.xaxis.get_major_ticks():
        tic.tick1On = tic.tick2On = False
    ax.set_yticklabels([])
    for tic in ax.yaxis.get_major_ticks():
        tic.tick1On = tic.tick2On = False


def make_1D_centers(L=N_exp, s_values=None):
    rows, cols = 1, 1
    fig, ax = plt.subplots(rows, cols)

    N = 2**L

    all_sigma = np.linspace(0, 1, N + 1)
    ax.plot(all_sigma, np.zeros(all_sigma.shape),
            color='black', marker='|', markersize=20)

    if s_values is not None:
        ax.plot(s_values, np.zeros(s_values.shape),
                color='blue', marker='o', linestyle='None')

    center_pts = all_sigma[:-1] + 0.5 / N
    ax.plot(center_pts, np.zeros(center_pts.shape),
            color='red', marker='o', linestyle='None')

    for i, x_val in enumerate(center_pts):
        label = r'$\sigma_{%d}$' % (i,)
        ax.annotate(label, xy=(x_val, -0.05), xytext=(x_val - 0.01, -0.04),
                    fontsize=24)

    remove_axis_frame(ax)
    ax.axis('scaled')
    ax.set_xlim(-0.05, 1.05)
    ax.set_ylim(-0.05, 0.05)

    width, height = fig.get_size_inches()
    fig.set_size_inches(2 * width, 2 * height, forward=True)

    plt.show()


def add_labeled_M(ax, time_only, log_errors, M_choices, loc='upper right'):
    for x, y, M, c in zip(time_only, log_errors, M_choices, SEABORN_COLORS):
        ax.plot([x], [y], marker='o', label=r'$M = %d$' % (M,), color=c)
    ax.legend(loc=loc)


def make_time_plots(L10_M_choices, L11_M_choices, L12_M_choices, L13_M_choices,
                    L10_time_pairs, L11_time_pairs, L12_time_pairs,
                    L13_time_pairs):
    rows = cols = 2
    fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(rows, cols)

    L10_log_errors = [np.log10(pair[0]) for pair in L10_time_pairs]
    L11_log_errors = [np.log10(pair[0]) for pair in L11_time_pairs]
    L12_log_errors = [np.log10(pair[0]) for pair in L12_time_pairs]
    L13_log_errors = [np.log10(pair[0]) for pair in L13_time_pairs]

    L10_time_only = [pair[1] for pair in L10_time_pairs]
    L11_time_only = [pair[1] for pair in L11_time_pairs]
    L12_time_only = [pair[1] for pair in L12_time_pairs]
    L13_time_only = [pair[1] for pair in L13_time_pairs]

    true_min = np.min(L10_log_errors + L11_log_errors +
                      L12_log_errors + L13_log_errors)
    true_min -= 1.0
    true_max = np.max(L10_log_errors + L11_log_errors +
                      L12_log_errors + L13_log_errors)
    true_max += 1.3
    true_left = np.min(L10_time_only + L11_time_only +
                       L12_time_only + L13_time_only)
    true_left -= 0.1
    true_right = np.max(L10_time_only + L11_time_only +
                        L12_time_only + L13_time_only)
    true_right += 0.1

    ax1.plot(L10_time_only, L10_log_errors, color='black')
    ax1.set_ylabel(r'$\log_{10} ||e||_2$', rotation=0, fontsize=20, labelpad=40)
    ax1.set_title(r'$L = 10$', fontsize=20)
    ax1.set_xlim((true_left, true_right))
    ax1.set_ylim((true_min, true_max))

    add_labeled_M(ax1, L10_time_only, L10_log_errors, L10_M_choices)

    ax2.plot(L11_time_only, L11_log_errors, color='black')
    ax2.set_title(r'$L = 11$', fontsize=20)
    ax2.set_xlim((true_left, true_right))
    ax2.set_ylim((true_min, true_max))

    add_labeled_M(ax2, L11_time_only, L11_log_errors, L11_M_choices)

    ax3.plot(L12_time_only, L12_log_errors, color='black')
    ax3.set_ylabel(r'$\log_{10} ||e||_2$', rotation=0, fontsize=20, labelpad=40)
    ax3.set_xlabel('runtime', fontsize=20)
    ax3.set_title(r'$L = 12$', fontsize=20)
    ax3.set_xlim((true_left, true_right))
    ax3.set_ylim((true_min, true_max))

    add_labeled_M(ax3, L12_time_only, L12_log_errors,
                  L12_M_choices, loc='upper left')

    ax4.plot(L13_time_only, L13_log_errors, marker='o')
    ax4.set_title(r'$L = 13$', fontsize=20)
    ax4.set_xlabel('runtime', fontsize=20)
    ax4.set_xlim((true_left, true_right))
    ax4.set_ylim((true_min, true_max))

    add_labeled_M(ax4, L13_time_only, L13_log_errors,
                  L13_M_choices, loc='upper left')

    width, height = fig.get_size_inches()
    fig.set_size_inches(2 * width, 2 * height, forward=True)

    plt.show()


def make_butterfly(L=4, level=None):
    rows, cols = 1, 1
    fig, ax = plt.subplots(rows, cols)

    if level is not None:
        s_index, t_index = L - level, level
        ax.set_title(r'$L = %d, \, \ell = %d$' % (L, L - s_index),
                     fontsize=20)

    heights = np.linspace(0, 1, L + 1)

    x_vals = np.array([0.0, 1.0])  # Left and right half.
    for i in xrange(L + 1):
        j = L - i

        begin = np.array([0.0, heights[i]])
        end = np.array([1.0, heights[j]])
        quarter_dir = 0.25 * (end - begin)
        dx, dy = quarter_dir
        ax.arrow(dx, heights[i] + dy, 2 * dx, 2 * dy,
                 length_includes_head=True, color='blue')
        ax.arrow(0.0, heights[i], dx, dy,
                 length_includes_head=False, color='black')
        ax.plot([3 * dx, 1.0], [heights[i] + 3 * dy, heights[j]],
                color='black', linewidth=1.5)

    if level is not None:
        s_boundaries = np.linspace(-0.55, -0.05, 2**s_index + 1)
        ax.plot(s_boundaries,
                heights[s_index] * np.ones(2**s_index + 1),
                color='black', marker='|', markersize=20)
        s_width = s_boundaries[1] - s_boundaries[0]
        ax.plot(s_width * 0.5 + s_boundaries[:-1],
                heights[s_index] * np.ones(2**s_index),
                color='red', marker='o', linestyle='None')
        t_boundaries = np.linspace(1.05, 1.55, 2**t_index + 1)
        ax.plot(t_boundaries,
                heights[t_index] * np.ones(2**t_index + 1),
                color='black', marker='|', markersize=20)
        t_width = t_boundaries[1] - t_boundaries[0]
        ax.plot(t_width * 0.5 + t_boundaries[:-1],
                heights[t_index] * np.ones(2**t_index),
                color='red', marker='o', linestyle='None')

    remove_axis_frame(ax)

    ax.axis('scaled')

    if level is None:
        ax.set_xlim(0, 1)
        ax.set_ylim(0, 1)
    else:
        ax.set_xlim(-0.55, 1.55)
        ax.set_ylim(-0.1, 1.1)

    width, height = fig.get_size_inches()
    fig.set_size_inches(2 * width, 2 * height, forward=True)

    plt.show()


def get_random_intervals(N):
    interval_width = 1.0 / N
    interval_starts = np.linspace(0, 1 - interval_width, N)
    L = interval_starts + interval_width * np.random.random(N)
    R = interval_starts + interval_width * np.random.random(N)
    return L, R


def naive_interaction(L, R, N=16, return_fig=False):
    rows, cols = 1, 1
    fig, ax = plt.subplots(rows, cols)

    N = N or len(L)
    ax.set_title(r'$N = %d$' % (N,), fontsize=20)

    for l_val in L:
        for r_val in R:
            ax.plot([0, 1], [l_val, r_val], color='blue')

    remove_axis_frame(ax)
    ax.annotate('$S$', xy=(0.0, 0.5), xytext=(-0.05, 0.5),
                fontsize=20)
    ax.annotate('$T$', xy=(1.0, 0.5), xytext=(1.025, 0.5),
                fontsize=20)
    ax.axis('scaled')

    width, height = fig.get_size_inches()
    fig.set_size_inches(2 * width, 2 * height, forward=True)

    if return_fig:
        return ax, fig
    else:
        plt.show()


def binned_interaction(s_values, t_values, L=2):
    N = len(s_values)
    bin_width = 1.0 / 2**L
    sigma_values = 0.5 * bin_width + np.arange(2**L) * bin_width
    bin_ends = np.arange(2**L + 1) * bin_width
    ax, fig = naive_interaction(sigma_values, t_values,
                                N=N, return_fig=True)
    ax.plot(np.zeros(bin_ends.shape),
            bin_ends, marker='_', color='black',
            markersize=20)
    ax.plot(np.zeros(s_values.shape), s_values,
            color='blue', marker='o', linestyle='None')
    ax.plot(np.zeros(sigma_values.shape), sigma_values,
            color='red', marker='o', linestyle='None')
    ax.set_xlim(-0.1, 1.0)

    plt.show()


def show_tau_refine():
    rows, cols = 1, 1
    fig, ax = plt.subplots(rows, cols)

    ax.plot([-1, 1], [0, 0], color='black', marker='|', markersize=20)
    ax.plot([-1, 0, 1], [-0.2, -0.2, -0.2], color='black',
            marker='|', markersize=20)
    ax.plot([0], [0], color='red', marker='o', linestyle='None')
    ax.plot([-0.5, 0.5], [-0.2, -0.2], color='red',
            marker='o', linestyle='None')

    dx, dy = 0.5, -0.2
    ax.arrow(-dx * 0.1, dy * 0.1, -dx * 0.8, dy * 0.8,
             length_includes_head=True, color='blue')
    ax.arrow(dx * 0.1, dy * 0.1, dx * 0.8, dy * 0.8,
             length_includes_head=True, color='blue')

    remove_axis_frame(ax)

    ax.axis('scaled')
    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-0.3, 0.1)

    ax.annotate(r'$\tau$', xy=(0.0, 0.0), xytext=(-0.02, 0.05),
                fontsize=24)
    ax.annotate(r"$\tau_{+}'$", xy=(0.0, 0.0), xytext=(-0.5 - 0.025, -0.325),
                fontsize=24)
    ax.annotate(r"$\tau_{+}''$", xy=(0.0, 0.0), xytext=(0.5 - 0.025, -0.325),
                fontsize=24)

    width, height = fig.get_size_inches()
    fig.set_size_inches(2 * width, 2 * height, forward=True)

    plt.show()


def show_sigma_coarsen():
    rows, cols = 1, 1
    fig, ax = plt.subplots(rows, cols)

    ax.plot([-1, 1], [-0.2, -0.2], color='black', marker='|', markersize=20)
    ax.plot([-1, 0, 1], [0, 0, 0], color='black',
            marker='|', markersize=20)
    ax.plot([0], [-0.2], color='red', marker='o', linestyle='None')
    ax.plot([-0.5, 0.5], [0, 0], color='red',
            marker='o', linestyle='None')

    dx, dy = 0.5, -0.2
    ax.arrow(-0.5 + dx * 0.1, dy * 0.1, dx * 0.8, dy * 0.8,
             length_includes_head=True, color='blue')
    ax.arrow(0.5 - dx * 0.1, dy * 0.1, -dx * 0.8, dy * 0.8,
             length_includes_head=True, color='blue')

    remove_axis_frame(ax)

    ax.axis('scaled')
    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-0.3, 0.1)

    ax.annotate(r'$\sigma_{-}$', xy=(0.0, 0.0), xytext=(-0.02, -0.275),
                fontsize=24)
    ax.annotate(r'$\sigma$', xy=(0.0, 0.0), xytext=(-0.5 - 0.025, 0.05),
                fontsize=24)
    ax.annotate(r"$\sigma'$", xy=(0.0, 0.0), xytext=(0.5 - 0.025, 0.05),
                fontsize=24)

    width, height = fig.get_size_inches()
    fig.set_size_inches(2 * width, 2 * height, forward=True)

    plt.show()


Overwriting make_partition_plots.py