To interpolate a function we only had to choose the nodes (and the interpolation algorithm); for quadrature we must also choose the associated weights. The choices of nodes and weights are, however, tightly linked; with this idea in mind - and with the capabilities of an object-oriented language - we build some higher-level structures - effectively, classes - to handle nodes and weights together. In particular, we will use a dictionary, storing nodes as keys and weights as values.
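As a minimal sketch of this design (the two nodes and their weights below are purely illustrative, not the module's defaults), a quadrature rule reduces to a dict plus a weighted sum:

```python
# A quadrature rule as a dict {node: weight}; this two-node rule on [0, 1]
# is illustrative only.
wnodes = {0.25: 0.5, 0.75: 0.5}

def quadrature(f, wnodes):
    """Approximate the integral of f by the weighted sum over the nodes."""
    return sum(f(q) * w for q, w in wnodes.items())

print(quadrature(lambda x: x, wnodes))  # 0.5, exact for linear integrands
```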
The work done behind the scenes is contained in the following module:
In [1]:
import sys
sys.path.append('/home/alex/NumAnEx2014/Ex03')
from myModule import *
Importing this module brings in the scipy and numpy tools we will need in the following exercises, along with three custom classes:
To explain some details of the behavior of these classes, we report their docstrings hereafter (to get to the exercises, you can just skip to the next section):
class _Nodes:
In [ ]:
""" Represents nodes as keys of a dictionary (in the subclass WNodes the values
become weights associated to the node). Being this only a blueprint for
WNodes, it is advised to use the latter.
__init__ handles arguments with the following behavior:
$wnodes is expected to be a dictionary - in which case is copied into a
variable - or an iterable - is converted into a dictionary with uniform
values None - or another _Nodes object - keys are copied, values set None;
$a, $b should be the extrema of the interval in which all nodes are in;
%equi is an optional argument used to generate equispaced nodes in [a,b] -
it is expected to be a dictionary containing a key nNodes with value
the number of nodes, and an optional key extrIncl with value True or False
depending if extrema should be or should not included in the nodes.
$cheb is an optional argument used to generate Chebyshev nodes in [a,b],
which is expected to be a dictionary, which works as in $equi.
The class contains also some "static" methods for general purpose nodes
generation: eqNodes - which return a list of N equispaced nodes in [a,b] -
and chebNodes - which returns a list of N Chebyshev nodes in [a,b]. """
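A standalone sketch of the two node generators named above (the real eqNodes/chebNodes in myModule may differ in signature and node ordering):

```python
import numpy as np

def eqNodes(N, a, b):
    """N equispaced nodes in [a, b], extrema included."""
    return [float(x) for x in np.linspace(a, b, N)]

def chebNodes(N, a, b):
    """N Chebyshev nodes: roots of the N-th Chebyshev polynomial,
    mapped from [-1, 1] onto [a, b]."""
    k = np.arange(N)
    return [float(x) for x in
            (a + b) / 2 + (b - a) / 2 * np.cos((2 * k + 1) * np.pi / (2 * N))]

print(eqNodes(3, 0, 1))  # [0.0, 0.5, 1.0]
```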
class WNodes:
In [ ]:
""" Subclass of _Nodes, represents nodes and weights respectively as keys and
values in a single dictionary $wnodes.
__init__ handles arguments as in _Nodes, with the addition of the optional
argument %weights - when %weights is passed, the constructor tries to
associate to the nodes the weights specified in $weights.
The class contains also the methods iterWNodes - which rescales nodes and
weights by a factor 1/N - and the utility method uniformWeights - which
sets all weights associated to current nodes to %val. """
class Quad:
In [ ]:
""" Subclass of WNodes, it is useful for weights calculations using specified
rules (e.g. Newton-Cotes) and quadrature of funcions.
__init__ handles arguments as in _WNodes, with the following additions:
if the weightsCalc argument is passed with value val, the constructor
calculates weights from nodes with rule val (currently supports only
"lagrange", which uses lagrange interpolants);
if the newtonCotes argument is passed, its value is expected to be a
dictionary containing the keys: %degree - which specifies the degree of the
rule - and %type - which can have value 'open' or 'closed'.
There are different additional methods:
I, given a function, returns its quadrature computed at $wnodes;
lagrangeWeight returns the definite integral from $a to $b of a lagrange
basis polynomial;
the methods lagrangeBasis, lagrangeWeights, calcWeightsLagrange, calculate
lagrange interpolants, integrate them and use them to get the weights
starting from known nodes;
the "static" method legendreP returns the %N-th Legendre polynomial;
the "static" methods peanoArg, peanoInt return resp. the %k-th Peano
argument function and its integral from $a to $b;
peanoKer returns the %k-th Peano kernel on [$a, $b] evaluated in %t. """
In [ ]:
## from Quad, not executable.
def I(self, f):
"Returns the quadrature of the function f in $wnodes"
return sum(f(q)*w for q,w in self.wnodes.items())
In [ ]:
## from Quad, not executable.
def lagrangeBasis(self, x):
    "Returns the Lagrange basis polynomial which has value 1 at the node x"
    xGrid = self.getList()
    yGrid = [0]*len(self.wnodes)
    i = xGrid.index(x)
    yGrid[i] = 1
    return lagrange(xGrid, yGrid)
def lagrangeWeight(self, x):
    "Returns the weight of the node x, i.e. the integral of its basis polynomial"
    integBasis = self.lagrangeBasis(x).integ()  # antiderivative, computed once
    return integBasis(self.b) - integBasis(self.a)
def calcWeightsLagrange(self):
    "Calculates the weights associated to the current nodes in $wnodes."
    self.wnodes = {x : self.lagrangeWeight(x) for x in self.wnodes.keys()}
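As a self-contained check of this approach (independent of the module; the helper name lagrange_weight is ours), integrating the Lagrange basis polynomials over three equispaced nodes on [0, 1] should recover the classical Simpson weights 1/6, 4/6, 1/6:

```python
from scipy.interpolate import lagrange

a, b = 0.0, 1.0
nodes = [0.0, 0.5, 1.0]

def lagrange_weight(x, nodes, a, b):
    """Integral over [a, b] of the basis polynomial that is 1 at x, 0 elsewhere."""
    yGrid = [1.0 if q == x else 0.0 for q in nodes]
    integ = lagrange(nodes, yGrid).integ()  # antiderivative of the basis poly
    return integ(b) - integ(a)

weights = [lagrange_weight(x, nodes, a, b) for x in nodes]
print(weights)  # approximately [1/6, 2/3, 1/6]
```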
In [ ]:
## from WNodes, not executable.
def iterWNodes(self, N):
"Rescales $wnodes by a factor 1/N"
a, b = self.a, self.b
if a in self.wnodes.keys() and b in self.wnodes.keys():
# We have to check if both extrema are included: in this case the weights
# in the images (through rescaling and translation) of the extrema inside
# (a,b) are the sum of the weights of the extrema.
weightA, weightB = self.wnodes[a], self.wnodes[b]
# translate into 0 the internal nodes and rescale them by 1/N
_wnodes = {(q-a)/N : w/N for q,w in self.wnodes.items()
if q!=a and q!=b}
# scale back into [a,b] the previous nodes
self.wnodes = {q + j*(b-a)/N +a: w for j in range(N)
for q,w in _wnodes.items()}
# rescale weights also in the extrema
self.wnodes.update({a : weightA/N, b : weightB/N})
# sum both weights in the nodes in (a,b) which, through rescaling,
# are image of a, b.
for j in range(1,N):
self.wnodes.update({a + j*(b-a)/N : (weightA + weightB)/N})
else:
_wnodes = {(q-a)/N : w/N for q,w in self.wnodes.items()}
self.wnodes = {q + j*(b-a)/N +a : w for j in range(N)
for q,w in _wnodes.items()}
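The same replication-and-rescaling idea can be sketched independently of the class (the helper composite below is ours, shown on the trapezoid rule):

```python
def composite(wnodes, a, b, N):
    """Replicate a base rule on [a, b] over N subintervals, scaling the
    weights by 1/N and summing weights that land on the same node."""
    out = {}
    for j in range(N):
        for q, w in wnodes.items():
            node = a + j * (b - a) / N + (q - a) / N  # q mapped into subinterval j
            out[node] = out.get(node, 0.0) + w / N
    return out

trap = {0.0: 0.5, 1.0: 0.5}          # trapezoid rule on [0, 1]
comp = composite(trap, 0.0, 1.0, 4)  # composite trapezoid, 5 nodes
print(sorted(comp.items()))          # interior weights 1/4, extrema 1/8
```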
In [2]:
%matplotlib inline
NMin = 3
NMax = 29
expFuncHalf = ["exp(-x) in [0,1]", (lambda x : np.exp(-x), 1-1/np.exp(1))]
rungeHalf = ["Runge function in [0,1]",
(lambda x : 1/(1 + (10*x - 5)**2), np.arctan(5)/5)]
In [3]:
NCotes, chebInterp, iterTrapez = [], [], []
for n in range(NMin,NMax):
# Closed Newton-Cotes formulas with n nodes
NCotes.append(
Quad(weightsCalc="lagrange", equi={"nNodes":n}))
# Chebyshev nodes
chebInterp.append(
Quad(weightsCalc="lagrange", cheb={"nNodes":n}))
# Iterated trapezia
iterTrapez_ = Quad(weightsCalc="lagrange", equi={"nNodes":2})
iterTrapez_.iterWNodes(n)
iterTrapez.append(iterTrapez_)
In [4]:
for testF in [expFuncHalf, rungeHalf]:
yNCotes, yChebI, yITrapez = [], [], []
currentF, currentFV = testF[1] # unpacks the function and its exact integral
for n in range(NMin,NMax):
yNCotes.append(NCotes[n-NMin].I(currentF))
yChebI.append(chebInterp[n-NMin].I(currentF))
yITrapez.append(iterTrapez[n-NMin].I(currentF))
plt.title("N. of nodes vs quadrature error for "+str(testF[0])+"""
Red: Newton-Cotes
Green: Chebyshev nodes interpolation
Blue: Iterated trapezia""")
plt.semilogy(range(NMin,NMax), abs(np.array(yNCotes)-currentFV), 'r')
plt.semilogy(range(NMin,NMax), abs(np.array(yChebI)-currentFV), 'g')
plt.semilogy(range(NMin,NMax), abs(np.array(yITrapez)-currentFV), 'b')
plt.show()
In [5]:
def legendreP(N):
    "Returns the N-th Legendre polynomial."
    if N==0:
        return np.poly1d([1])
    if N==1:
        return np.poly1d([1,0])
    if N>1:
        # Bonnet's recursion: N*P_N = (2N-1)*x*P_{N-1} - (N-1)*P_{N-2}
        return (((2*N-1) * np.poly1d([1,0])
                 * legendreP(N-1)
                 -(N-1)*legendreP(N-2))/N)
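The recursion can be cross-checked against numpy's own Legendre module (a sketch; legP below is a standalone copy of the function above):

```python
import numpy as np
from numpy.polynomial import legendre as npleg

def legP(N):
    """N-th Legendre polynomial via Bonnet's recursion:
    N*P_N = (2N-1)*x*P_{N-1} - (N-1)*P_{N-2}."""
    if N == 0:
        return np.poly1d([1.0])
    if N == 1:
        return np.poly1d([1.0, 0.0])
    return ((2*N - 1) * np.poly1d([1, 0]) * legP(N - 1)
            - (N - 1) * legP(N - 2)) / N

for N in range(6):
    # leg2poly turns Legendre-series coefficients into power-series
    # coefficients (lowest degree first); poly1d stores them highest first.
    ref = npleg.leg2poly([0]*N + [1])[::-1]
    assert np.allclose(legP(N).coeffs, ref)
print(legP(2).coeffs)  # coefficients of (3x^2 - 1)/2
```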
In [8]:
approxGauss, gauss, chebInterp = [], [], []
for n in range(NMin,NMax):
# an approximation of Legendre-Gauss weights
legRoots = np.roots(Quad.legendreP(n))
approxGauss.append(Quad(wnodes=legRoots,
a=-1, b=1, weightsCalc="lagrange"))
# Chebyshev nodes
chebInterp.append(Quad(a=-1, b=1, weightsCalc="lagrange",
cheb={"nNodes":n}))
#Legendre-Gauss weights from stock functions
gauss.append(Quad(wnodes=leggauss(n)[0], a=-1, b=1,
weightsCalc="lagrange", weights=leggauss(n)[1]))
In [9]:
meanError, maxError = [], []
for n in range(NMin,NMax):
# some estimates on the errors in our weights approximations
meanError.append(np.mean(
abs(np.array(approxGauss[n-NMin].getWList()) - leggauss(n)[1])))
maxError.append(max(
abs(np.array(approxGauss[n-NMin].getWList()) - leggauss(n)[1])))
# plots the errors between our approximate Gauss weights and stock ones:
plt.title("""N. of nodes vs weights errors between approx. and stock Gauss
Red: maximum error
Blue: mean error""")
plt.plot(range(NMin,NMax), meanError, 'b')
plt.plot(range(NMin,NMax), maxError, 'r')
plt.show()
In [10]:
expFunc = ["exp(-x) in [-1,1]",
(lambda x : np.exp(-x), 2*np.sinh(1))]
runge = ["Runge function in [-1,1]",
(lambda x : 1/(1 + 25*x**2), 2*np.arctan(5)/5)]
for testF in [expFunc, runge]:
currentF, currentFV = testF[1] # unpacks the function and its exact integral
yChebI, yGauss = [], []
for n in range(NMin,NMax):
yChebI.append(chebInterp[n-NMin].I(currentF))
yGauss.append(gauss[n-NMin].I(currentF))
# plots of the errors for quadrature, Chebyshev vs Gauss rules
plt.title("N. of nodes vs quadrature error for "+str(testF[0])+"""
Red: error with Chebyshev wnodes
Blue: error with Gauss wnodes""")
plt.semilogy(range(NMin,NMax), abs(np.array(yChebI)-currentFV), 'r')
plt.semilogy(range(NMin,NMax), abs(np.array(yGauss)-currentFV), 'b')
plt.show()
Notice that in the first case the Gauss method converges much faster, and at a more constant rate, than Chebyshev, while in the second case it behaves nearly identically to Chebyshev.
Gauss and Chebyshev weights are indeed always positive.
A proof$^*$ of this fact for Gauss weights can be obtained directly, with a trick, from the fact that the quadrature formula is exact for polynomials up to degree $2n-1$. Indeed, letting $q_i$ be the Legendre-Gauss nodes, fix an index $k$ and consider the polynomial
$$ p(x) = \prod_{i \neq k} (x-q_i). $$
$p^2$ is non-negative, not identically zero, and has degree $2n-2$, so the quadrature formula is exact for it:
$$ 0 < \int_{-1}^{1} p^2(x) \, dx = \sum_s p^2(q_s) w_s = \sum_s \prod_{i \neq k} (q_s-q_i)^2 w_s = \prod_{i \neq k} (q_k-q_i)^2 w_k, $$
and since the product multiplying $w_k$ is a positive square, in conclusion $w_k > 0, \forall k$.
A proof$^{\star}$ for Chebyshev nodes instead requires some computation. From properties of Chebyshev polynomials one can obtain the following expression for the weights (with $2n$ nodes):
$$ w_s = \dfrac{1}{n} \left( 1 + \dfrac{(-1)^s}{1 - 4n^2} + \sum_{j=1}^{n-1} \dfrac{2}{1 - 4j^2} \cos \left( \dfrac{j \pi s}{n} \right) \right). $$
Since it holds that
$$ \dfrac{2}{1 - 4j^2} = \dfrac{1}{1 - 2j} + \dfrac{1}{1 + 2j} = -\left( \dfrac{1}{2j - 1} - \dfrac{1}{2j + 1} \right), $$
the sum telescopes in absolute value, and we get
$$ \left| \dfrac{(-1)^s}{1 - 4n^2} + \sum_{j=1}^{n-1} \dfrac{2}{1 - 4j^2} \cos \left( \dfrac{j \pi s}{n} \right) \right| \leq \dfrac{1}{4n^2 - 1} + \sum_{j=1}^{n-1} \dfrac{2}{4j^2 - 1} = \dfrac{1}{4n^2 - 1} + 1 - \dfrac{1}{2n - 1} = 1 - \dfrac{2n}{4n^2 - 1} < 1, $$
so in conclusion $w_s > 0$.
$[*]$: see http://en.wikipedia.org/wiki/Gaussian_quadrature#Proof_that_the_weights_are_positive
$[\star]$: see http://en.wikipedia.org/wiki/Clenshaw–Curtis_quadrature#Precomputing_the_quadrature_weights and http://link.springer.com/article/10.1007%2FBF01385885
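A quick numerical sanity check of the positivity claim (a check, not a proof; it uses numpy's stock leggauss):

```python
from numpy.polynomial.legendre import leggauss

# Stock Legendre-Gauss weights are strictly positive for every order tested,
# and sum to the length of the interval [-1, 1].
for n in range(1, 50):
    nodes, weights = leggauss(n)
    assert (weights > 0).all()
    assert abs(weights.sum() - 2.0) < 1e-10
print("all Gauss weights positive for n = 1, ..., 49")
```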
In [11]:
def posit(x):
    "Returns the positive part of x"
    return x if x >= 0 else 0
In [12]:
def peanoArg(k,x,t):
"Returns the $k-th power of the positive part of ($x-$t)."
return posit(x-t)**k
In [ ]:
## from Quad, not executable.
def peanoInt(k,t,a,b):
    "Returns the integral from a to b of the k-th Peano argument function."
    if a<=b:
        if t>=b:
            return 0
        else:
            # t is assumed to lie in [a,b], so the integrand vanishes on [a,t]
            return (b-t)**(k+1) /(k+1)
    else:
        return -Quad.peanoInt(k,t,b,a)
In [ ]:
## from Quad, not executable.
def peanoKer(self, k, t):
"Returns the k-th Peano kernel on [$a, $b] evaluated in t."
peanoQuad = sum(Quad.peanoArg(k,node,t)*weight for
node,weight in self.wnodes.items())
return (Quad.peanoInt(k,t,self.a,self.b) - peanoQuad)
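A standalone sketch of the same machinery, for the trapezoid rule on [0, 1] with k = 1 (the helper names are ours): analytically the kernel is K1(t) = -t(1-t)/2, non-positive on the whole interval.

```python
def posit(x):
    """Positive part of x."""
    return x if x >= 0 else 0

def peano_ker(wnodes, a, b, k, t):
    """k-th Peano kernel at t for the rule wnodes on [a, b], with t in [a, b]:
    exact integral of (x-t)_+^k minus its quadrature."""
    integral = (b - t)**(k + 1) / (k + 1) if t < b else 0.0
    quad = sum(posit(q - t)**k * w for q, w in wnodes.items())
    return integral - quad

trap = {0.0: 0.5, 1.0: 0.5}  # trapezoid rule on [0, 1]
print(peano_ker(trap, 0.0, 1.0, 1, 0.5))  # -0.125, i.e. -t(1-t)/2 at t = 0.5
```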
Notice that:
In [13]:
NMin, NMax = 1,1
pltGrid = np.linspace(0,1,4097)
# Midpoint rule kernel
ONC = Quad(newtonCotes={"degree":2, "type":"open"},
weightsCalc="lagrange")
plt.title("Order 1 Peano kernel\nMidpoint rule")
plt.plot(pltGrid, [ONC.peanoKer(1,x) for x in pltGrid])
plt.show()
# Trapezoid rule kernel
CNC = Quad(newtonCotes={"degree":1, "type":"closed"},
weightsCalc="lagrange")
plt.title("Order 1 Peano kernel\nTrapezoid rule")
plt.plot(pltGrid, [CNC.peanoKer(1,x) for x in pltGrid])
plt.show()
# Simpson's rule kernel
CNC = Quad(newtonCotes={"degree":2, "type":"closed"},
weightsCalc="lagrange")
plt.title("Order 1 Peano kernel\nSimpson's rule")
plt.plot(pltGrid, [CNC.peanoKer(1,x) for x in pltGrid])
plt.show()
In [14]:
for n in range(1,16,3):
CNC = Quad(newtonCotes={"degree":n, "type":"closed"},
weightsCalc="lagrange")
plt.title("Order "+str(n)+" Peano kernel\n"+
"Closed Newton-Cotes of degree "+str(n))
plt.plot(pltGrid, [CNC.peanoKer(n,x) for x in pltGrid])
plt.show()
for n in range(1,16,3):
CNC = Quad(newtonCotes={"degree":n, "type":"closed"},
weightsCalc="lagrange")
plt.title("Order "+str(n+1)+" Peano kernel\n"+
"Closed Newton-Cotes of degree "+str(n))
plt.plot(pltGrid, [CNC.peanoKer(n+1,x) for x in pltGrid])
plt.show()
for n in [1,2,4,8,16,32,64]:
Gauss = Quad(wnodes=leggauss(n)[0], weights=leggauss(n)[1])
plt.title("Order 1 Peano kernel\n"+str(2*n+1)+" Gauss nodes")
plt.plot(pltGrid, [Gauss.peanoKer(1,x) for x in pltGrid])
plt.show()