In [1]:
using Plots,LaTeXStrings
default(markersize=3,linewidth=1.5)
using LinearAlgebra,DifferentialEquations
using DataFrames
include("FNC.jl");
We'll examine the guessing/IVP approach for the case $\lambda=0.6$.
In [2]:
lambda = 0.6
phi = (r,w,dwdr) -> lambda/w^2 - dwdr/r;
We have to avoid $r=0$, or we would divide by zero.
In [3]:
a = eps(); b = 1;
We convert the ODE to a first-order system in $v_1$, $v_2$.
In [4]:
f = (v,p,r) -> [ v[2]; phi(r,v[1],v[2]) ];
Now we try multiple guesses for the unknown $w(0)$ and plot the solutions.
In [5]:
plot(xaxis=(L"x"), yaxis=(L"w(x)"),
    title="Solutions for choices of w(0)", leg=:bottomright)
for w0 in 0.4:0.1:0.9
    IVP = ODEProblem(f, [w0, 0], (a, b))
    sol = solve(IVP)
    plot!(sol, vars=[1], label="w0 = $w0")   # plot only the w component
end
plot!()
Out[5]:
On the graph, it's the curve starting at $w(0)=0.8$ that comes closest to the required condition $w(1)=1$.
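To quantify that impression, here is a minimal check, assuming f, a, and b as defined above: it re-solves each IVP and measures how far the trajectory ends from the target $w(1)=1$.
In [ ]:
for w0 in 0.4:0.1:0.9
    IVP = ODEProblem(f, [w0, 0], (a, b))
    sol = solve(IVP)
    miss = abs(sol(b)[1] - 1)    # distance from the required value w(1) = 1
    println("w0 = $w0:  |w(1)-1| = $miss")
end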
We eliminate the guesswork for $w(0)$ and let `shoot` do the work for us.
In [6]:
lambda = 0.6
phi = (r,w,dwdr) -> lambda/w^2 - dwdr/r;
a = eps(); b = 1;
We specify the given and unknown endpoint values.
In [7]:
lval = []; lder = 0; # w(a)=?, w'(a)=0
rval = 1; rder = []; # w(b)=1, w'(b)=?
In [8]:
r,w,dwdx = FNC.shoot(phi,(a,b),lval,lder,rval,rder,0.8)
plot(r,w,label="",
title="Correct solution",xaxis=(L"x"), yaxis=(L"w(x)"))
Out[8]:
The value of $w$ at $r=1$, meant to be exactly one, was computed to be
In [9]:
@show w[end];
The accuracy is consistent with the error tolerance used for the IVP solution by `shoot`. The initial value $w(0)$ that gave this solution is
In [10]:
@show w[1];
We compute shooting solutions for several values of $\lambda$.
In [11]:
lambda = 6:4:18
plot(xaxis=(L"x"), yaxis=([-2,0.5], L"u(x)"),
    title="Shooting instability", leg=:topleft)
for lam in lambda
    lval = -1.0;  rval = 0.0;
    phi = (x,u,dudx) -> lam^2*u + lam^2
    x,u = FNC.shoot(phi,(0.0,1.0),lval,[],rval,[],0.0)
    plot!(x, u, m=:o, label=latexstring("\\lambda = $lam"))   # L"..." does not interpolate $lam
end
plot!()
Out[11]:
The solutions are inaccurate near the right endpoint as $\lambda$ increases.
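We can make this claim quantitative, since this linear problem has the exact solution $u(x) = \sinh(\lambda x)/\sinh(\lambda) - 1$. The sketch below repeats the shooting solves from above and reports the maximum error for each $\lambda$.
In [ ]:
for lam in 6:4:18
    phi = (x,u,dudx) -> lam^2*u + lam^2
    x,u = FNC.shoot(phi,(0.0,1.0),-1.0,[],0.0,[],0.0)
    u_exact = @. sinh(lam*x)/sinh(lam) - 1    # exact solution of this BVP
    println("lambda = $lam:  max error = ", norm(u - u_exact, Inf))
end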
We test differentiation matrices for the first and second derivatives of the function $x + e^{\sin 4x}$ over $[-1,1]$.
In [12]:
f = x -> x + exp(sin(4*x));
For reference, here are the exact first and second derivatives.
In [13]:
dfdx = x -> 1 + 4*exp(sin(4*x))*cos(4*x);
d2fdx2 = x -> 4*exp(sin(4*x))*(4*cos(4*x)^2-4*sin(4*x));
We discretize on equally spaced nodes and evaluate $f$ at the nodes.
In [14]:
t,Dx,Dxx = FNC.diffmat2(12,[-1,1])
y = f.(t);
Then the first two derivatives of $f$ each require one matrix-vector multiplication.
In [15]:
yx = Dx*y;
yxx = Dxx*y;
The results are fair but not very accurate for this small value of $n$.
In [16]:
plot(dfdx,-1,1,layout=2,subplot=1,
xaxis=(L"x"),yaxis=(L"f'(x)"),leg=:none)
scatter!(t,yx,subplot=1)
plot!(d2fdx2,-1,1,subplot=2,
xaxis=(L"x"),yaxis=(L"f''(x)"))
scatter!(t,yxx,subplot=2,leg=:none)
Out[16]:
An experiment confirms the order of accuracy.
In [17]:
n = @. round(Int,2^(4:.5:11) )
err1 = zeros(size(n))
err2 = zeros(size(n))
for (k,n) = enumerate(n)
t,Dx,Dxx = FNC.diffmat2(n,[-1,1])
y = f.(t)
err1[k] = norm( dfdx.(t) - Dx*y, Inf )
err2[k] = norm( d2fdx2.(t) - Dxx*y, Inf )
end
DataFrame(n=n,error_first=err1,error_second=err2)
Out[17]:
For $O(n^{-p})$ convergence, we use a log-log plot of the errors, on which the data should fall along straight lines of slope $-p$.
In [18]:
plot(n,[err1 err2],m=:o,label=[L"f'" L"f''"])
plot!(n, 100*n.^(-2), l=:dash, label="2nd order",
    xaxis=(:log10, L"n"), yaxis=(:log10, "max error"),
    title="Convergence of finite differences")
Out[18]:
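The slopes can also be estimated numerically. As a sketch using err1, err2, and n from the experiment above, a least-squares fit of $\log(\text{err}) \approx c - p\log n$ recovers the order $p$.
In [ ]:
A = [ones(length(n)) log.(n)]     # design matrix for the fit
p1 = -(A \ log.(err1))[2]         # estimated order for f'
p2 = -(A \ log.(err2))[2]         # estimated order for f''
@show p1 p2;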
Here is a $4\times 4$ Chebyshev differentiation matrix.
In [19]:
t,Dx = FNC.diffcheb(3,[-1,1])
Dx
Out[19]:
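As a quick sanity check (an addition, using the t and Dx just computed), any differentiation matrix should map a constant function to zero and the identity function to ones, up to rounding error.
In [ ]:
@show norm(Dx*ones(4));       # derivative of a constant is zero
@show norm(Dx*t - ones(4));   # derivative of x is one at every node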
We again test the convergence rate.
In [20]:
f = x -> x + exp(sin(4*x));
dfdx = x -> 1 + 4*exp(sin(4*x))*cos(4*x);
d2fdx2 = x -> 4*exp(sin(4*x))*(4*cos(4*x)^2-4*sin(4*x));
n = 5:5:70
err1 = zeros(size(n))
err2 = zeros(size(n))
for (k,n) = enumerate(n)
t,Dx,Dxx = FNC.diffcheb(n,[-1,1])
y = f.(t)
err1[k] = norm( dfdx.(t) - Dx*y, Inf )
err2[k] = norm( d2fdx2.(t) - Dxx*y, Inf )
end
DataFrame(n=n,error_first=err1,error_second=err2)
Out[20]:
In [21]:
plot(n,[err1 err2],m=:o,label=[L"f'" L"f''"],
xaxis=(L"n"), yaxis=(:log10,"max error"),
title="Convergence of Chebyshev derivatives")
Out[21]:
Notice that the graph has a log-linear scale, not log-log. Hence the convergence is exponential, as we expect for a spectral method on a smooth function.
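To make the exponential claim concrete, we can fit $\log(\text{err}) \approx c_1 + c_2 n$ over the entries that precede the rounding plateau; the cutoff below is an assumed choice, not part of the original experiment.
In [ ]:
k = findall(err1 .> 1e-11)            # keep pre-plateau data (assumed cutoff)
A = [ones(length(k)) collect(n)[k]]
c = A \ log.(err1[k])
@show exp(c[2]);                      # error shrinks by this factor per unit n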
Consider the linear BVP $$ u'' - (\cos x) u' + (\sin x) u = 0, \quad 0 \le x \le \pi/2, \quad u(0)=1, \; u(\pi/2)=e. $$ Its exact solution is simple.
In [22]:
exact = x -> exp(sin(x));
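A numerical spot check confirms the claim: with $u = e^{\sin x}$ we get $u' = \cos x\, e^{\sin x}$ and $u'' = (\cos^2 x - \sin x)\, e^{\sin x}$, so the residual of the ODE should vanish at any test point (the point chosen here is arbitrary).
In [ ]:
up = x -> cos(x)*exp(sin(x))                  # u'
upp = x -> (cos(x)^2 - sin(x))*exp(sin(x))    # u''
x0 = 0.7                                      # arbitrary test point
@show upp(x0) - cos(x0)*up(x0) + sin(x0)*exact(x0);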
The problem is presented in our standard form, so we can identify the coefficient functions in the ODE. Each should be coded so that it can accept either scalar or vector inputs.
In [23]:
p = x -> -cos(x);
q = sin;
r = x -> 0; # function, not value
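As a small illustration (not part of the original demo), the dot syntax broadcasts these scalar definitions over a vector of points.
In [ ]:
@show p.([0, pi/4, pi/2]);
@show q.([0, pi/4, pi/2]);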
We solve the BVP and compare the result to the exact solution.
In [24]:
x,u = FNC.bvplin(p,q,r,[0,pi/2],1,exp(1),25);
In [25]:
plot(x,u,layout=(2,1),subplot=1,yaxis=("solution"),
title="Solution of the BVP",leg=:none)
plot!(x,exact.(x)-u,m=:o,subplot=2,yaxis=("error"),leg=:none)
Out[25]:
Next we solve the linear BVP $u'' = \lambda^2(u+1)$ over $[0,1]$, with $u(0)=-1$ and $u(1)=0$; its exact solution is known in closed form.
In [26]:
lambda = 10;
exact = x -> sinh(lambda*x)/sinh(lambda) - 1;
These functions define the ODE.
In [27]:
p = x -> 0;
q = x -> -lambda^2;
r = x -> lambda^2;
We compare the computed solution to the exact one for increasing $n$.
In [28]:
n = [32,64,128,256,512]
err = zeros(size(n))
for (k,n) = enumerate(n)
x,u = FNC.bvplin(p,q,r,[0,1],-1,0,n)
err[k] = norm(exact.(x)-u,Inf)
end
DataFrame(n=n,error=err)
Out[28]:
Each time $n$ is doubled, the error is reduced by a factor very close to 4, indicative of second-order convergence.
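This sketch verifies that statement from the data: the ratio of successive errors should be close to 4.
In [ ]:
@show err[1:end-1] ./ err[2:end];    # ratios should approach 4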
In [29]:
plot(n,err,m=:o,label="observed",
xaxis=(:log10,L"n"), yaxis=(:log10,"max error"),
title="Convergence of finite differences")
plot!(n,n.^(-2),l=:dash,label="2nd order")
Out[29]:
Next we solve a nonlinear pendulum problem, $\theta'' = -0.05\,\theta' - \sin\theta$, with boundary conditions $\theta(0)=2.5$ and $\theta(5)=-2$, supplying a linear ramp between the boundary values as the initial guess.
In [30]:
phi = (t,theta,omega) -> -0.05*omega - sin(theta)
init = LinRange(2.5,-2,101)
t,theta = FNC.bvp(phi,[0,5],2.5,[],-2,[],init)
plot(t,theta,xaxis=(L"t"),yaxis=(L"\theta(t)"),
title="Pendulum over [0,5]",leg=:none)
Out[30]:
Note how the angle decreases throughout the motion until near the end. If we extend the time interval, we find a rather different solution.
In [31]:
t,theta = FNC.bvp(phi,[0,8],2.5,[],-2,[],init);
plot(t,theta,xaxis=(L"t"),yaxis=(L"\theta(t)"),
title="Pendulum over [0,8]",leg=:none)
Out[31]:
This time the angle initially increases, and the pendulum hovers near the vertical position before swinging back down.
We return to the BVP from the shooting demonstration at the start of this section, now with $\lambda=0.5$: the boundary conditions are $w'(0)=0$ and $w(1)=1$, and we start from a constant initial guess.
In [32]:
lambda = 0.5
phi = (r,w,dwdr) -> lambda/w^2 - dwdr/r;
init = ones(301)
r,w1 = FNC.bvp(phi,[0,1],[],0,1,[],init)
plot(r,w1,xaxis=(L"r"),yaxis=(L"w(r)"),title="Solution",leg=:none)
Out[32]:
By choosing a different initial guess, we arrive at another solution.
In [33]:
init = 0.5*ones(301)
r,w2 = FNC.bvp(phi,[0,1],[],0,1,[],init)
plot!(r,w2)
Out[33]:
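A quick check (an addition, using w1 and w2 from above) confirms that the two results are genuinely different solutions, not the same root found twice.
In [ ]:
@show norm(w1 - w2, Inf);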
Next is the equation $\epsilon u'' = u^3 - u$ over $[0,1]$, with $u(0)=-1$ and $u(1)=1$, a steady Allen-Cahn problem. Finding a solution is easy at some values of $\epsilon$.
In [34]:
epsilon = 0.05;
phi = (x,u,dudx) -> (u^3 - u) / epsilon;
init = LinRange(-1,1,141)
x,u1 = FNC.bvp(phi,[0,1],-1,[],1,[],init)
plot(x,u1,label=L"\epsilon = 0.05",
xaxis=(L"x"),yaxis=(L"u(x)"),title="Allen-Cahn solution")
Out[34]:
However, finding a good starting guess is not trivial for smaller values of $\epsilon$. Note below that the iteration stops without converging to a solution.
In [35]:
epsilon = 0.005;
x,u = FNC.bvp(phi,[0,1],-1,[],1,[],init);
A simple way around this problem is continuation: use the solution computed at one value of $\epsilon$ as the starting guess at a smaller, more difficult value.
In [36]:
x,u2 = FNC.bvp(phi,[0,1],-1,[],1,[],u1)
plot!(x,u2,label=L"\epsilon = 0.005")
Out[36]:
In this case we can continue further.
In [37]:
epsilon = 0.0005
x,u3 = FNC.bvp(phi,[0,1],-1,[],1,[],u2)
plot!(x,u3,label=L"\epsilon = 0.0005")
Out[37]:
We solve $-(c(x)u')' + q(x)u = f(x)$ over $[0,1]$ with zero boundary values. Here are the coefficient functions.
In [38]:
c = x -> x^2;
q = x -> 4;
f = x -> sin(pi*x);
In [39]:
x,u = FNC.fem(c,q,f,0,1,50)
plot(x,u,label="",
xaxis=(L"x"),yaxis=(L"u"),title="Solution by finite elements")
Out[39]:
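As a final hedged check, not part of the original demo, comparing this solution against one computed on a doubled grid gives a rough error estimate. This assumes FNC.fem returns equally spaced nodes, so that every other fine-grid node coincides with a coarse-grid node.
In [ ]:
xA,uA = FNC.fem(c,q,f,0,1,50)
xB,uB = FNC.fem(c,q,f,0,1,100)
@show norm(uB[1:2:end] - uA, Inf);   # coarse vs. fine solution at shared nodes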