In [1]:
using Plots,LaTeXStrings
default(markersize=3,linewidth=1.5)
using LinearAlgebra,DifferentialEquations
using DataFrames
include("FNC.jl");

Example 10.1.2

We'll examine the guessing/IVP approach for the case $\lambda=0.6$.


In [2]:
lambda = 0.6
phi = (r,w,dwdr) -> lambda/w^2 - dwdr/r;

We have to avoid $r=0$, or we would divide by zero.


In [3]:
a = eps();  b = 1;

We convert the ODE to a first-order system by letting $v_1=w$ and $v_2=w'$, so that $v_1'=v_2$ and $v_2'=\phi(r,v_1,v_2)$.


In [4]:
f = (v,p,r) -> [ v[2]; phi(r,v[1],v[2]) ];

Now we try multiple guesses for the unknown $w(0)$ and plot the solutions.


In [5]:
plot()    # start a fresh figure to hold the family of curves
for w0 = 0.4:0.1:0.9
    IVP = ODEProblem(f,[w0,0],(a,b))
    sol = solve(IVP)
    plot!(sol,vars=[1],label="w0 = $w0")
end
plot!([],[],label="",
    xaxis=(L"x"), yaxis=(L"w(x)"),   
    title="Solutions for choices of w(0)",leg=:bottomright)


Out[5]:
[Figure: "Solutions for choices of w(0)"; w(x) versus x for the guesses w0 = 0.4, 0.5, ..., 0.9]

On the graph, it's the curve starting at $w(0)=0.8$ that comes closest to the required condition $w(1)=1$.
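
To quantify the comparison, we can tabulate the boundary mismatch $w(1)-1$ for each guess. This is a sketch reusing the objects defined above:

# Sketch: boundary mismatch w(1) - 1 for each initial guess w(0).
w0_values = 0.4:0.1:0.9
mismatch = [ solve(ODEProblem(f,[w0,0.0],(a,b)))[1,end] - 1 for w0 in w0_values ]
DataFrame(w0=collect(w0_values), mismatch=mismatch)

The entry nearest zero should be the one for $w(0)=0.8$, consistent with the graph.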

Example 10.1.3

We eliminate the guesswork for $w(0)$ and let shoot do the work for us.
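
Shooting works by treating the terminal mismatch as a function of the unknown initial value and finding its root. Here is a minimal sketch of that idea using a secant iteration; shoot_secant is a hypothetical helper, not FNC's actual implementation:

# Sketch of the shooting idea: root-find on s = w(a) so that the IVP
# solution satisfies w(b) = target. The condition w'(a) = 0 is built in,
# matching this problem. (Hypothetical; FNC.shoot differs in its details.)
function shoot_secant(phi, a, b, target, s1, s2; tol=1e-8)
    f = (v,p,r) -> [ v[2]; phi(r,v[1],v[2]) ]
    F = s -> solve(ODEProblem(f,[s,0.0],(a,b)))[1,end] - target
    F1, F2 = F(s1), F(s2)
    while abs(s2 - s1) > tol
        s1, s2 = s2, s2 - F2*(s2 - s1)/(F2 - F1)    # secant update
        F1, F2 = F2, F(s2)
    end
    return s2
end

For this problem, shoot_secant(phi, eps(), 1, 1.0, 0.7, 0.9) would converge to the initial value near 0.79 found below.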


In [6]:
lambda = 0.6
phi = (r,w,dwdr) -> lambda/w^2 - dwdr/r;
a = eps();  b = 1;

We specify the given and unknown endpoint values.


In [7]:
lval = [];  lder = 0;   # w(a)=?, w'(a)=0
rval = 1;   rder = [];  # w(b)=1, w'(b)=?

In [8]:
r,w,dwdx = FNC.shoot(phi,(a,b),lval,lder,rval,rder,0.8)
plot(r,w,label="",
    title="Correct solution",xaxis=(L"x"), yaxis=(L"w(x)"))


Out[8]:
[Figure: "Correct solution"; w(x) versus x]

The value of $w$ at $r=1$, meant to be exactly one, was computed to be


In [9]:
@show w[end];


w[end] = 0.9999966610805521

The accuracy is consistent with the error tolerance used for the IVP solution by shoot. The initial value $w(0)$ that gave this solution is


In [10]:
@show w[1];


w[1] = 0.7877517010083118
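
That tolerance belongs to the IVP solver used inside shoot. For a direct solve with DifferentialEquations, the analogous control is the standard reltol and abstol keywords; as a sketch, using the system f from Example 10.1.2:

# Sketch: tightening the IVP tolerances in a direct solve.
IVP = ODEProblem(f,[w[1],0.0],(a,b))
sol = solve(IVP, reltol=1e-11, abstol=1e-11)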

Example 10.1.4

We compute shooting solutions for several values of $\lambda$.


In [11]:
lambda = 6:4:18
plot()    # start a fresh figure to hold the family of curves
for lam in lambda
    lval = -1.0;  rval = 0.0;
    phi = (x,u,dudx) -> lam^2*u + lam^2
    x,u = FNC.shoot(phi,(0.0,1.0),lval,[],rval,[],0.0)
    plot!(x,u,m=:o,label=latexstring("\\lambda = $lam"))   # L"..." does not interpolate $lam
end
plot!([],[],label="",
    xaxis=(L"x"),yaxis=([-2,0.5],L"u(x)"),
    title="Shooting instability",leg=:topleft)


Out[11]:
[Figure: "Shooting instability"; u(x) versus x for λ = 6, 10, 14, 18]

The solutions are inaccurate near the right endpoint as $\lambda$ increases. This is the instability inherent to shooting: the general solution of the ODE contains the growing mode $e^{\lambda x}$, so any error in the initial data at $x=0$ is amplified by a factor of roughly $e^{\lambda}$ by the time it reaches $x=1$.
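
A quick calculation shows the size of these amplification factors for the values of $\lambda$ used above:

# Growth factor exp(λ): initial errors of size δ can grow to about exp(λ)*δ
# at x = 1, overwhelming the solution for the larger values of λ.
DataFrame(lambda=collect(6:4:18), growth=exp.(6:4:18))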

Example 10.2.1

We test first-order and second-order differentiation matrices for the function $x + e^{\sin 4x}$ over $[-1,1]$.


In [12]:
f = x -> x + exp(sin(4*x));

For reference, here are the exact first and second derivatives.


In [13]:
dfdx = x -> 1 + 4*exp(sin(4*x))*cos(4*x);
d2fdx2 = x -> 4*exp(sin(4*x))*(4*cos(4*x)^2-4*sin(4*x));

We discretize on equally spaced nodes and evaluate $f$ at the nodes.


In [14]:
t,Dx,Dxx = FNC.diffmat2(12,[-1,1])
y = f.(t);

Then the first two derivatives of $f$ each require one matrix-vector multiplication.


In [15]:
yx = Dx*y;
yxx = Dxx*y;

The results are fair but not very accurate for this small value of $n$.


In [16]:
plot(dfdx,-1,1,layout=2,subplot=1,
    xaxis=(L"x"),yaxis=(L"f'(x)"),leg=:none)
scatter!(t,yx,subplot=1)
plot!(d2fdx2,-1,1,subplot=2,
    xaxis=(L"x"),yaxis=(L"f''(x)"))
scatter!(t,yxx,subplot=2,leg=:none)


Out[16]:
[Figure: two panels comparing the exact f'(x) and f''(x) to the values computed at the nodes]

An experiment confirms the order of accuracy.


In [17]:
n = @. round(Int,2^(4:.5:11) )
err1 = zeros(size(n))
err2 = zeros(size(n))
for (k,n) = enumerate(n)
    t,Dx,Dxx = FNC.diffmat2(n,[-1,1])
    y = f.(t)
    err1[k] = norm( dfdx.(t) - Dx*y, Inf )
    err2[k] = norm( d2fdx2.(t) - Dxx*y, Inf )
end
DataFrame(n=n,error_first=err1,error_second=err2)


Out[17]:

15 rows × 3 columns

    n   error_first   error_second
   16   0.755235      16.7359
   23   0.462212      9.08896
   32   0.270439      4.63461
   45   0.147625      2.16176
   64   0.0764738     0.951138
   91   0.038894      0.415558
  128   0.019987      0.187604
  181   0.0101026     0.084895
  256   0.00508498    0.0390723
  362   0.00255452    0.0182999
  512   0.00128084    0.00869686
  724   0.000641868   0.00418688
 1024   0.000321316   0.00203477
 1448   0.000160848   0.000996844
 2048   8.0461e-5     0.000490924

To verify $O(n^{-p})$ convergence, we use a log-log plot of the errors: if the error behaves like $Cn^{-p}$, then $\log(\text{error}) \approx \log C - p\log n$, so the data fall near a straight line of slope $-p$.


In [18]:
plot(n,[err1 err2],m=:o,label=[L"f'" L"f''"])
plot!(n,10*10*n.^(-2),l=:dash,label="2nd order",
    xaxis=(:log10,"n"), yaxis=(:log10,"max error"),
    title="Convergence of finite differences")


Out[18]:
[Figure: "Convergence of finite differences"; log-log plot of max error versus n for f' and f'', with a 2nd-order reference line]

Example 10.2.2

Here is a $4\times 4$ Chebyshev differentiation matrix.


In [19]:
t,Dx = FNC.diffcheb(3,[-1,1])
Dx


Out[19]:
4×4 Array{Float64,2}:
 -3.16667    4.0       -1.33333    0.5     
 -1.0        0.333333   1.0       -0.333333
  0.333333  -1.0       -0.333333   1.0     
 -0.5        1.33333   -4.0        3.16667 
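
The entries come from differentiating the polynomial interpolant through the Chebyshev points. As a sketch, here is the classical explicit construction (Trefethen's formula with the negative-sum trick, written for the ascending node ordering used here; FNC.diffcheb may differ in its details):

# Sketch: (n+1)×(n+1) Chebyshev differentiation matrix on [-1,1] with
# ascending nodes x_j = -cos(jπ/n). Off-diagonal entries follow the
# explicit formula; each diagonal entry is set so that its row sums to zero.
function chebdiff(n)
    x = [ -cos(j*pi/n) for j in 0:n ]
    c = [2; ones(n-1); 2] .* (-1.0).^(0:n)    # endpoint weights with alternating signs
    D = [ i==j ? 0.0 : c[i]/(c[j]*(x[i]-x[j])) for i in 1:n+1, j in 1:n+1 ]
    D -= Diagonal(vec(sum(D,dims=2)))         # "negative sum trick" for the diagonal
    return x, D
end

Calling chebdiff(3) reproduces the matrix shown above.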

We again test the convergence rate.


In [20]:
f = x -> x + exp(sin(4*x));
dfdx = x -> 1 + 4*exp(sin(4*x))*cos(4*x);
d2fdx2 = x -> 4*exp(sin(4*x))*(4*cos(4*x)^2-4*sin(4*x));

n = 5:5:70
err1 = zeros(size(n))
err2 = zeros(size(n))
for (k,n) = enumerate(n)
    t,Dx,Dxx = FNC.diffcheb(n,[-1,1])
    y = f.(t)
    err1[k] = norm( dfdx.(t) - Dx*y, Inf )
    err2[k] = norm( d2fdx2.(t) - Dxx*y, Inf )
end
DataFrame(n=n,error_first=err1,error_second=err2)


Out[20]:

14 rows × 3 columns

    n   error_first   error_second
    5   2.79836       46.4779
   10   0.748502      34.6393
   15   0.0870108     10.5477
   20   0.00657271    1.59913
   25   0.000387174   0.160919
   30   2.01505e-5    0.012071
   35   8.79023e-7    0.000717109
   40   3.27907e-8    3.4952e-5
   45   1.05804e-9    1.42753e-6
   50   3.02594e-11   5.02883e-8
   55   1.15552e-12   2.14504e-9
   60   8.41993e-13   9.71344e-10
   65   2.56684e-13   9.4769e-10
   70   4.73399e-13   1.11094e-9

In [21]:
plot(n,[err1 err2],m=:o,label=[L"f'" L"f''"],
    xaxis=(L"n"), yaxis=(:log10,"max error"),
    title="Convergence of Chebyshev derivatives")


Out[21]:
[Figure: "Convergence of Chebyshev derivatives"; log-linear plot of max error versus n for f' and f'']

Notice that the graph has a log-linear scale, not log-log. Hence the convergence is exponential, as we expect for a spectral method on a smooth function.

Example 10.3.1

Consider the linear BVP $$ u'' - (\cos x) u' + (\sin x) u = 0, \quad 0 \le x \le \pi/2, \quad u(0)=1, \; u(\pi/2)=e. $$ Its exact solution is simple.


In [22]:
exact = x -> exp(sin(x));

The problem is presented in our standard form $u'' + p(x)u' + q(x)u = r(x)$, so we can identify the coefficient functions in the ODE. Each should be coded so that it can accept either scalar or vector inputs.


In [23]:
p = x -> -cos(x);
q = sin;
r = x -> 0;      # function, not value

We solve the BVP and compare the result to the exact solution.


In [24]:
x,u = FNC.bvplin(p,q,r,[0,pi/2],1,exp(1),25);

In [25]:
plot(x,u,layout=(2,1),subplot=1,yaxis=("solution"),
    title="Solution of the BVP",leg=:none)

plot!(x,exact.(x)-u,m=:o,subplot=2,yaxis=("error"),leg=:none)


Out[25]:
[Figure: "Solution of the BVP"; top panel shows the solution, bottom panel the pointwise error, which is about 3e-4 at worst]

Example 10.3.2

We solve $u'' - \lambda^2 u = \lambda^2$ over $[0,1]$ with $u(0)=-1$ and $u(1)=0$, taking $\lambda=10$. Its exact solution is known in closed form.


In [26]:
lambda = 10;
exact = x -> sinh(lambda*x)/sinh(lambda) - 1;

These functions define the ODE.


In [27]:
p = x -> 0;
q = x -> -lambda^2;
r = x -> lambda^2;

We compare the computed solution to the exact one for increasing $n$.


In [28]:
n = [32,64,128,256,512]
err = zeros(size(n))
for (k,n) = enumerate(n)
    x,u = FNC.bvplin(p,q,r,[0,1],-1,0,n)
    
    err[k] = norm(exact.(x)-u,Inf)
end
DataFrame(n=n,error=err)


Out[28]:

5 rows × 2 columns

    n   error
   32   0.00148047
   64   0.000372619
  128   9.34931e-5
  256   2.33831e-5
  512   5.84704e-6

Each time $n$ is doubled, the error is reduced by a factor very close to 4, which indicates second-order convergence.
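
The ratios of successive errors confirm this directly:

# Ratios of successive errors; values near 4 indicate O(n^-2) convergence.
@show err[1:end-1] ./ err[2:end];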


In [29]:
plot(n,err,m=:o,label="observed",
    xaxis=(:log10,L"n"), yaxis=(:log10,"max error"),
    title="Convergence of finite differences") 
plot!(n,n.^(-2),l=:dash,label="2nd order")


Out[29]:
[Figure: "Convergence of finite differences"; log-log plot of observed max error versus n with a 2nd-order reference line]

Example 10.4.2

We solve the damped pendulum equation $\theta'' = -0.05\,\theta' - \sin\theta$ over $0\le t\le 5$ with boundary conditions $\theta(0)=2.5$ and $\theta(5)=-2$, using a linear ramp between the boundary values as the initial guess.


In [30]:
phi = (t,theta,omega) -> -0.05*omega - sin(theta)
init = LinRange(2.5,-2,101)

t,theta = FNC.bvp(phi,[0,5],2.5,[],-2,[],init)
plot(t,theta,xaxis=(L"t"),yaxis=(L"\theta(t)"),
    title="Pendulum over [0,5]",leg=:none)


Out[30]:
[Figure: "Pendulum over [0,5]"; θ(t) versus t]

Note how the angle decreases throughout the motion until near the end. If we extend the time interval, we find a rather different solution.


In [31]:
t,theta = FNC.bvp(phi,[0,8],2.5,[],-2,[],init);
plot(t,theta,xaxis=(L"t"),yaxis=(L"\theta(t)"),
    title="Pendulum over [0,8]",leg=:none)


Out[31]:
[Figure: "Pendulum over [0,8]"; θ(t) versus t]

This time the angle initially increases, and the pendulum hovers near the vertical position before swinging back down.

Example 10.4.3

We revisit the BVP of Example 10.1.2, now with $\lambda=0.5$ and solved by FNC.bvp instead of shooting.


In [32]:
lambda = 0.5
phi = (r,w,dwdr) -> lambda/w^2 - dwdr/r;

init = ones(301)
r,w1 = FNC.bvp(phi,[0,1],[],0,1,[],init)
plot(r,w1,xaxis=(L"r"),yaxis=(L"w(r)"),title="Solution",leg=:none)


Out[32]:
[Figure: "Solution"; w(r) versus r]

By choosing a different initial guess, we arrive at another solution.


In [33]:
init = 0.5*ones(301)
r,w2 = FNC.bvp(phi,[0,1],[],0,1,[],init)
plot!(r,w2)


Out[33]:
[Figure: "Solution"; both solutions w1 and w2 on the same axes]

Example 10.4.4

We solve the Allen-Cahn equation $\epsilon u'' = u^3 - u$ on $[0,1]$ with $u(0)=-1$ and $u(1)=1$. Finding a solution is easy at some values of $\epsilon$.


In [34]:
epsilon = 0.05;
phi = (x,u,dudx) -> (u^3 - u) / epsilon;

init = LinRange(-1,1,141)
x,u1 = FNC.bvp(phi,[0,1],-1,[],1,[],init)

plot(x,u1,label=L"\epsilon = 0.05",
    xaxis=(L"x"),yaxis=(L"u(x)"),title="Allen-Cahn solution")


Out[34]:
[Figure: "Allen-Cahn solution"; u(x) versus x for ε = 0.05]

However, finding a good starting guess is not trivial for smaller values of $\epsilon$. Note below that the iteration stops without converging to a solution.


In [35]:
epsilon = 0.005;
x,u = FNC.bvp(phi,[0,1],-1,[],1,[],init);


┌ Warning: Iteration did not find a root.
└ @ Main.FNC /Users/driscoll/Dropbox/books/fnc-extras/julia/functions/chapter04.jl:164

A simple way around this problem is to use the result of a solved version as the starting guess for a more difficult version.
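
This strategy is known as parameter continuation. In loop form it might look like the following sketch (the names and the loop are illustrative; the cells below perform the same steps one at a time):

# Sketch of continuation: solve for decreasing epsilon, warm-starting each
# solve with the previous solution as the initial guess.
guess = collect(LinRange(-1,1,141))
for eps_val in [0.05, 0.005, 0.0005]
    phi = (x,u,dudx) -> (u^3 - u) / eps_val
    x, u = FNC.bvp(phi,[0,1],-1,[],1,[],guess)
    guess = u              # becomes the initial guess at the next epsilon
end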


In [36]:
x,u2 = FNC.bvp(phi,[0,1],-1,[],1,[],u1)
plot!(x,u2,label=L"\epsilon = 0.005")


Out[36]:
[Figure: "Allen-Cahn solution"; curve for ε = 0.005 added]

In this case we can continue further.


In [37]:
epsilon = 0.0005
x,u3 = FNC.bvp(phi,[0,1],-1,[],1,[],u2)
plot!(x,u3,label=L"\epsilon = 0.0005")


Out[37]:
[Figure: "Allen-Cahn solution"; curve for ε = 0.0005 added]

Example 10.5.2

Here are the coefficient functions.


In [38]:
c = x -> x^2;
q = x -> 4;
f = x -> sin(pi*x);

In [39]:
x,u = FNC.fem(c,q,f,0,1,50)
plot(x,u,label="",
    xaxis=(L"x"),yaxis=(L"u"),title="Solution by finite elements")


Out[39]:
[Figure: "Solution by finite elements"; u versus x]
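
For insight into what a finite element solver must do, here is a hedged sketch of piecewise-linear Galerkin assembly for a problem of the form $-(c(x)u')' + q(x)u = f(x)$ on $[0,1]$ with zero boundary values, sampling the coefficients at element midpoints. It illustrates the general method and is not a reproduction of FNC.fem, whose conventions may differ:

# Sketch: assemble and solve a piecewise-linear finite element system.
function fem_sketch(c, q, f, n)
    h = 1/n
    x = h*(0:n)                        # nodes, including both boundaries
    K = zeros(n-1, n-1)                # combined stiffness + mass matrix
    F = zeros(n-1)                     # load vector
    for e in 1:n                       # loop over elements [x[e], x[e+1]]
        m = (x[e] + x[e+1])/2          # midpoint sample for c, q, f
        ke = c(m)/h * [1 -1; -1 1]     # element stiffness
        me = q(m)*h/6 * [2 1; 1 2]     # element mass
        fe = f(m)*h/2 * [1.0, 1.0]     # element load (midpoint rule)
        idx = (e-1, e)                 # interior-unknown indices of the element's nodes
        for (i,gi) in enumerate(idx), (j,gj) in enumerate(idx)
            if 1 <= gi <= n-1 && 1 <= gj <= n-1
                K[gi,gj] += ke[i,j] + me[i,j]
            end
        end
        for (i,gi) in enumerate(idx)
            if 1 <= gi <= n-1
                F[gi] += fe[i]
            end
        end
    end
    u = K \ F                          # solve for the interior values
    return x, [0; u; 0]                # append the zero boundary values
end

With the coefficient functions above, fem_sketch(c,q,f,50) should produce a solution similar to the plot, if FNC.fem uses the same convention.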