Day 2

6. Using numpy and scipy tools

Ex 6.1

First let's import some standard preliminary modules:


In [1]:
from scipy.interpolate import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Let's move on to some definitions.

We define two functions that quickly generate equispaced and Chebyshev nodes in $[0,1]$ (they will also be useful later) and, as required, qs, which returns a list containing both of these arrays of nodes:


In [2]:
def equiNodes(N):
    "returns an array of N equispaced nodes in [0,1]"
    return np.linspace(0, 1, N)

def ChebNodes(N):
    """returns a sorted array of N Chebyshev nodes in [0,1],
    0 and 1 included"""
    M = N - 2
    return np.asarray([0] + [np.cos(np.pi * (2*x-1)/(2*M)) /2 +.5 
                       for x in np.arange(1, M+1, 1)][::-1] + [1])
    # we include extrema of the interval because library interpolators
    # are defined only in between interpolation points; this way one can
    # evaluate interpolations without having to restrict to subintervals.

def qs(N):
    """returns a list containing as first argument an array of
    N equispaced points in [0,1], as second argument an array
    of N Chebyshev points in (0,1)"""
    return [equiNodes(N), ChebNodes(N)]
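As a quick sanity check of these generators (a standalone sketch, repeating the definitions above), the endpoints should be included and the nodes strictly increasing:

```python
import numpy as np

def equiNodes(N):
    "returns an array of N equispaced nodes in [0,1]"
    return np.linspace(0, 1, N)

def ChebNodes(N):
    """returns a sorted array of N Chebyshev nodes in [0,1],
    0 and 1 included"""
    M = N - 2
    return np.asarray([0] + [np.cos(np.pi * (2*x - 1) / (2*M)) / 2 + .5
                             for x in np.arange(1, M + 1, 1)][::-1] + [1])

q = ChebNodes(6)
print(q)  # endpoints included, nodes strictly increasing
```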

We collect all the library interpolating functions we will deal with in a single dictionary (the final results are expressed more easily and clearly using a dictionary structure):


In [3]:
def interpDict(f, q):
    "returns a dictionary containing all the required interpolatory functions"
    return {"interp1d" : interp1d(q, f(q)), 
            "BarycentricInterpolator" : BarycentricInterpolator(q, f(q)), 
            "KroghInterpolator" : KroghInterpolator(q, f(q)),
            #PiecewisePolynomial(q, zip(q, Derivativef(f(q)))),
            "PchipInterpolator" : PchipInterpolator(q, f(q)),
            "Akima1DInterpolator" : Akima1DInterpolator(q, f(q)),
            #PPoly(q, f(q)),
            #BPoly(q, f(q)),
            "UnivariateSpline" : UnivariateSpline(q, f(q)),
            "InterpolatedUnivariateSpline" : InterpolatedUnivariateSpline(q, f(q)),
            "lagrange" : lagrange(q, f(q))}

dummy = interpDict(lambda x:x, equiNodes(4))
interpNames = list(dummy.keys()) # we save the interpolator names for later use
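To illustrate how the dictionary is used (a reduced sketch with only three of the entries above): since the dummy dictionary interpolates the identity on four equispaced nodes, every method should reproduce it at intermediate points:

```python
import numpy as np
from scipy.interpolate import interp1d, BarycentricInterpolator, lagrange

def interpDict(f, q):
    # a reduced version of the dictionary defined above, for illustration
    return {"interp1d": interp1d(q, f(q)),
            "BarycentricInterpolator": BarycentricInterpolator(q, f(q)),
            "lagrange": lagrange(q, f(q))}

q = np.linspace(0, 1, 4)
dummy = interpDict(lambda x: x, q)
vals = {name: float(dummy[name](0.5)) for name in dummy}
print(vals)  # every interpolant of the identity returns ~0.5 at x = 0.5
```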

Ex 6.2

We define a function which returns a dictionary containing the sample functions we are interested in:


In [4]:
def fDict(x):
    """returns a dictionary containing the shifted absolute value,
    the Runge function and a sinusoid, each accessible by two keys"""
    return {"shiftedAbs" : abs(x-.5),         0: abs(x-.5),
            "runge" : 1/(1 + (10*x - 5)**2),  1: 1/(1 + (10*x - 5)**2),
            "sinusoidal" : np.sin(2*np.pi*x), 2: np.sin(2*np.pi*x)}
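The double keys let the same function be retrieved either by name or by index, for instance:

```python
import numpy as np

def fDict(x):
    # same dictionary as above, reproduced so the snippet is self-contained
    return {"shiftedAbs" : abs(x-.5),         0: abs(x-.5),
            "runge" : 1/(1 + (10*x - 5)**2),  1: 1/(1 + (10*x - 5)**2),
            "sinusoidal" : np.sin(2*np.pi*x), 2: np.sin(2*np.pi*x)}

print(fDict(0.25)["runge"], fDict(0.25)[1])  # same value by name or by index
```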

First we sketch some interpolations of the sample functions directly in the notebook. For equispaced points we get:


In [5]:
pltGrid = equiNodes(513)
NMin, NMax = 4, 20

for N in range(NMin, NMax, 5):
    for n in range(3):
        currentF = lambda x: fDict(x)[n]
        interpDictEval = interpDict(currentF, equiNodes(N))
        for i in interpNames:
            interp = interpDictEval[i]
            y = interp(pltGrid)
            plt.subplot(1, 3, n+1)  # subplot indices are 1-based
            plt.title(str(N) + " equisp nodes for "
                      + ["shiftedAbs", "runge", "sinusoidal"][n])
            plt.plot(pltGrid, y)
    plt.show()
    plt.close()


And now for Chebyshev points:


In [6]:
for N in range(NMin, NMax, 5):
    for n in range(3):
        currentF = lambda x: fDict(x)[n]
        interpDictEval = interpDict(currentF, ChebNodes(N))
        for i in interpNames:
            interp = interpDictEval[i]
            y = interp(pltGrid)
            plt.subplot(1, 3, n+1)  # subplot indices are 1-based
            plt.title(str(N) + " Cheb nodes for "
                      + ["shiftedAbs", "runge", "sinusoidal"][n])
            plt.plot(pltGrid, y)
    plt.show()
    plt.close()


For completeness, in the following cell we save the individual plots as files (which can be found in the subfolder /images) with the naming scheme

image-_interpolation method_-j-N-_function name_.png
where

  • j is 0 for equispaced points and 1 for Chebyshev points,
  • N is the number of grid nodes.

In [7]:
NMin, NMax = 10, 51

for j in range(0, 2):
    for N in range(NMin, NMax):
        for n in range(3):
            interpDictEval = interpDict(lambda x: fDict(x)[n], qs(N)[j])
            for i in interpDictEval.keys():
                interp = interpDictEval[i]
                y = interp(pltGrid)
                plt.plot(pltGrid, y)
                plt.savefig("images/image-" + str(i) + "-" + str(j) + "-" + str(N)
                            + "-" + ["shiftedAbs", "runge", "sinusoidal"][n] + ".png")
                plt.close()

Ex 6.3

We compute an approximation of the $L^{\infty}$ norm of the error, i.e. of the difference between each sample function and its interpolant.

We know that for equispaced points the error is largest near the extrema of the interval; with this in mind, we estimate the maximum by evaluating the error on a fine finite grid.
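The idea can be isolated in a tiny standalone sketch (using the Runge sample function and a piecewise-linear interpolant for illustration): evaluate $|f - p|$ on a fine grid and take the maximum as an estimate of the sup norm:

```python
import numpy as np
from scipy.interpolate import interp1d

f = lambda x: 1 / (1 + (10*x - 5)**2)   # the Runge sample function
nodes = np.linspace(0, 1, 7)
p = interp1d(nodes, f(nodes))           # piecewise-linear interpolant

approxPoints = np.linspace(0, 1, 2**12 + 1)  # fine grid for the max search
supErr = np.max(np.abs(f(approxPoints) - p(approxPoints)))
print(supErr)
```

The grid maximum is always a lower bound for the true sup norm; a fine enough grid makes the gap negligible for our purposes.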


In [16]:
searchDepth = 2**12 + 1 # how fine the grid for the max search will be
xL = len(interpNames)

minError = [{name : np.inf for name in interpNames} for j in range(3)]
# we will also store the minimum errors in this list for later use

approxPoints = equiNodes(searchDepth)
# the points where we perform the max search

out_file = open("ErrorValues.txt","w")
out_file.write("EQUISPACED POINTS\n\n")
# we collect for completeness our results in an output file
# (they would be uncomfortably long to display here)

NMin, NMax = 6, 51

In [17]:
for n in range(3):
    currentF = lambda x : fDict(x)[n]
    for N in range(NMin,NMax):
        interpDictEval = interpDict(currentF, equiNodes(N))
        for i in interpNames:    
            interp = interpDictEval[i]
            error = abs(currentF(approxPoints) - interp(approxPoints))
            maxError = max(error)
            out_file.write("{0:11} {1:28} {2:6} {3:2} {4:2}"
                               .format(["shiftedAbs", "runge", "sinusoidal"][n], str(i),
                                       " N="+str(N), ":", str(maxError)+"\n"))
            minError[n][i] = min(minError[n][i], maxError)
            
out_file.close()

Let's plot the minimum errors:


In [18]:
for n in range(3):
    plt.title(["shiftedAbs", "runge", "sinusoidal"][n])
    plt.xticks(range(xL), list(minError[n].keys()), rotation=70)
    plt.semilogy(range(xL), list(minError[n].values()))
    plt.show()


The algorithms which, for certain values of the number of interpolation nodes N, give the best $L^{\infty}$ approximation of the sample functions are

  • shiftedAbs: interp1d (which attains an error of ~0 for N odd)
  • runge: InterpolatedUnivariateSpline
  • sinusoidal: BarycentricInterpolator, KroghInterpolator

Note: a look at ErrorValues.txt reveals that some of these algorithms are indeed a good approximation for a certain N, but the error grows as N increases (for example, lagrange) or oscillates between odd and even N (for example, interp1d).
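The odd/even behaviour of interp1d on shiftedAbs can be checked directly (a sketch outside the main pipeline): when N is odd, one of the equispaced nodes falls exactly on the kink at x = 0.5, and the piecewise-linear interpolant reproduces |x - 0.5| exactly:

```python
import numpy as np
from scipy.interpolate import interp1d

f = lambda x: abs(x - .5)
grid = np.linspace(0, 1, 513)

errs = {}
for N in (5, 6):  # odd N places a node at the kink, even N does not
    nodes = np.linspace(0, 1, N)
    p = interp1d(nodes, f(nodes))
    errs[N] = np.max(np.abs(f(grid) - p(grid)))
print(errs)  # N=5 gives ~0 error, N=6 gives error 0.1 at the kink
```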

We now calculate the $L^{\infty}$ norm for Chebyshev points (this time we again use an equispaced grid to estimate the max):


In [19]:
approxPoints = equiNodes(searchDepth)
minError = [{name : np.inf for name in interpNames} for j in range(3)]

out_file = open("ErrorValues.txt","a")
out_file.write("\nCHEBYSHEV POINTS (extrema included)\n\n")

NMin, NMax = 6, 51

errorListCheb = np.array([{name : [0. for N in range(NMax-NMin)] for name in interpNames}
                          for n in range(3)])
# this time we keep all error values in memory for later use

In [20]:
for n in range(3):
    currentF = lambda x : fDict(x)[n]
    for N in range(NMin, NMax):
        interpDictEval = interpDict(currentF, ChebNodes(N))
        for i in interpNames:
            interp = interpDictEval[i]
            error = abs(currentF(approxPoints) - interp(approxPoints))
            maxError = max(error)
            errorListCheb[n][i][N-NMin] = maxError
            out_file.write("{0:11} {1:28} {2:6} {3:2} {4:2}"
                               .format(["shiftedAbs", "runge", "sinusoidal"][n], str(i),
                                       " N="+str(N), ":", str(maxError)+"\n"))
            minError[n][i] = min(minError[n][i], maxError)
            
out_file.close()

Again let's plot the minima:


In [21]:
for n in range(3):
    plt.title(["shiftedAbs", "runge", "sinusoidal"][n])
    plt.xticks(range(xL), list(minError[n].keys()), rotation=70)
    plt.semilogy(range(xL), list(minError[n].values()))
    plt.show()


In this case the best approximations are

  • shiftedAbs: interp1d
  • runge: KroghInterpolator, PchipInterpolator
  • sinusoidal: BarycentricInterpolator, KroghInterpolator

Ex 6.4

The average distance between $N$ points in $[0,1]$ is just $1/N$. We want to compute an approximation of the convergence rate, namely $$ r(N_1, N_2, e_{N_1}, e_{N_2}) := \left. \log \left( \dfrac{e_{N_1}}{e_{N_2}} \right) \middle/ \log \left( \dfrac{ N_2}{N_1} \right) \right. , $$ so that an error decaying like $N^{-r}$ yields the exponent $r$.


In [22]:
def r(N1, N2, eN1, eN2):
    return  (np.log(eN1 / eN2)) / np.log(N2 / N1)
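As a check with hypothetical numbers: for errors decaying like $e_N = C N^{-2}$ the function should return 2 regardless of the constant $C$:

```python
import numpy as np

def r(N1, N2, eN1, eN2):
    return (np.log(eN1 / eN2)) / np.log(N2 / N1)

C = 3.7                      # any constant prefactor drops out
e = lambda N: C * N**-2.0    # a hypothetical second-order error law
rate = r(10.0, 40.0, e(10.0), e(40.0))
print(rate)  # ~2.0
```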

We fix N1 and N2 small but a little apart, and calculate r for some values:


In [24]:
out_file = open("ConvergenceRates.txt","w")
out_file.write("CONVERGENCE RATES (Chebyshev, extrema included)\n\n")

for n in range(3):
    for i in interpNames:
        for N1s in range(0,5):
            for N2s in range(10,15):
                rC = r(N1s+NMin+0., N2s+NMin+0.,
                       errorListCheb[n][i][N1s], errorListCheb[n][i][N2s])
                out_file.write("{0:11} {1:28} {2:6} {3:6} {4:2} {5:2}"
                               .format(["shiftedAbs", "runge", "sinusoidal"][n], str(i), " N1="+str(N1s+NMin),
                                       " N2="+str(N2s+NMin), ":", str(rC)+"\n"))
        out_file.write("\n")
out_file.close()

The values obtained vary widely, depending also on whether N1 and N2 are odd or even (you can check this in ConvergenceRates.txt). However, they look quite stable when N1 and N2 are both even; we restrict to this case and take the arithmetic average:


In [26]:
rMean = [{name : 0 for name in interpNames} for j in range(3)]

for n in range(3):
    for i in interpNames:
        for N1s in range(0,5,2):
            for N2s in range(10,15,2):
                rC = r(N1s+NMin+0., N2s+NMin+0., errorListCheb[n][i][N1s], errorListCheb[n][i][N2s])
                rMean[n][i] += rC/9
        print("{0:11} {1:28} {2:1} {3:1}".format(["shiftedAbs", "runge", "sinusoidal"][n], str(i), ":", str(round(rMean[n][i], 2))))


shiftedAbs  lagrange                     : 1.18
shiftedAbs  interp1d                     : 1.2
shiftedAbs  PchipInterpolator            : 1.2
shiftedAbs  InterpolatedUnivariateSpline : 1.16
shiftedAbs  Akima1DInterpolator          : 1.2
shiftedAbs  BarycentricInterpolator      : 1.18
shiftedAbs  UnivariateSpline             : 0.14
shiftedAbs  KroghInterpolator            : 1.18
runge       lagrange                     : 2.33
runge       interp1d                     : 1.43
runge       PchipInterpolator            : 1.43
runge       InterpolatedUnivariateSpline : 2.06
runge       Akima1DInterpolator          : 1.95
runge       BarycentricInterpolator      : 2.33
runge       UnivariateSpline             : 0.22
runge       KroghInterpolator            : 2.33
sinusoidal  lagrange                     : 5.19
sinusoidal  interp1d                     : 2.17
sinusoidal  PchipInterpolator            : 2.34
sinusoidal  InterpolatedUnivariateSpline : 4.89
sinusoidal  Akima1DInterpolator          : 1.96
sinusoidal  BarycentricInterpolator      : 26.78
sinusoidal  UnivariateSpline             : 0.16
sinusoidal  KroghInterpolator            : 25.83

Note that BarycentricInterpolator and KroghInterpolator converge extremely fast for the sinusoid, and they also have a decent convergence rate for the other functions. UnivariateSpline, on the other hand, smooths the data by default (its smoothing factor s is nonzero unless set explicitly), so it is not designed to minimize the max norm.

In general the methods converge much faster for well-behaved analytic functions (the sinusoid), more slowly for the Runge function (analytic, but with poles close to the real interval), and very slowly for functions with singularities in some derivative (the shifted absolute value).
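This hierarchy is visible in a tiny standalone comparison (a sketch, not part of the exercise code; it uses plain Chebyshev points without the endpoint adjustment above, since BarycentricInterpolator can be evaluated anywhere): doubling N gains many digits for the sinusoid, while for the shifted absolute value the error barely moves:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def cheb(N):
    "N plain Chebyshev points in [0,1], extrema not included"
    k = np.arange(1, N + 1)
    return np.cos(np.pi * (2*k - 1) / (2*N)) / 2 + .5

grid = np.linspace(0, 1, 1001)
errs = {}
for name, f in [("sinusoidal", lambda x: np.sin(2*np.pi*x)),
                ("shiftedAbs", lambda x: abs(x - .5))]:
    for N in (8, 16):
        nodes = cheb(N)
        p = BarycentricInterpolator(nodes, f(nodes))
        errs[(name, N)] = np.max(np.abs(f(grid) - p(grid)))
        print(name, N, errs[(name, N)])
```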