A paper by Jones and Underwood suggests a model for the temperature behaviour $T(t)$ of a PV cell in terms of a nonlinear differential equation. Here we extract the key features as
\begin{equation} \frac{\text{d}T}{\text{d}t} = f(t, T) = c_{1} \left( c_{2} T_{\text{ambient}}^4 - T^4 \right) + c_{3} - \frac{c_4}{T} - c_5 ( T - T_{\text{ambient}} ), \end{equation}where the various $c_{1, \dots, 5}$ are constant parameters, and the cell is assumed to relax back to the ambient temperature fast enough to treat $T_{\text{ambient}}$ as a constant as well.
If we're given the values of the parameters together with a temperature value at time $t=0$, we can solve this initial value problem numerically.
We've solved lots of problems by integration already. A good scientist is a lazy scientist, so we can try to solve this one by integration as well.
Assume we know the solution at $t_j$ and want to compute the solution at $t_{j+1} = t_j + \Delta t$. We write
\begin{equation} \int_{t_j}^{t_{j+1}} \text{d}t \, \frac{\text{d}T}{\text{d}t} = T \left( t_{j+1} \right) - T \left( t_{j} \right). \end{equation}Using the differential equation we therefore get
\begin{equation} T \left( t_{j+1} \right) = T \left( t_{j} \right) + \int_{t_j}^{t_{j+1}} \text{d}t \, f(t, T). \end{equation}If we can solve the integral, we can move from the solution at $t_j$ to the solution at $t_{j+1}$.
The simplest solution of the integral was the Riemann integral approximation. The width of the interval is $t_{j+1} - t_j = \Delta t$. We know the value of $T(t_j)$. Therefore we can approximate
\begin{equation} \int_{t_j}^{t_{j+1}} \text{d}t \, f(t, T) \approx \Delta t \, \, f \left( t_j, T(t_j) \right), \end{equation}leading to Euler's method
\begin{equation} T \left( t_{j+1} \right) = T \left( t_{j} \right) + \Delta t \, \, f \left( t_j, T(t_j) \right), \end{equation}which in more compact notation is
\begin{equation} T_{j+1} = T_{j} + \Delta t \, \, f_j. \end{equation}Let's implement this where the ambient temperature is $290$K, the $c$ parameters are
\begin{align} c_1 &= 10^{-5} \\ c_2 &= 0.9 \\ c_3 &= 0 \\ c_4 &= 10^{-2} \\ c_5 &= 1 \end{align}and $T(0) = 300$K. We'll solve up to $t=10^{-2}$ hours (it relaxes very fast!).
Note: we're going to pass in all the parameter values using a Python dictionary. Dictionaries are a little like lists - they hold multiple things. However, the index is not an integer but a key that you specify - typically a string. They're defined using curly braces {}, with each key followed by a colon and then its value.
In [ ]:
from __future__ import division
import numpy
%matplotlib notebook
from matplotlib import pyplot
In [ ]:
parameters = { "T_ambient" : 290.0,
               "c1" : 1.0e-5,
               "c2" : 0.9,
               "c3" : 0.0,
               "c4" : 1.0e-2,
               "c5" : 1.0}
T_initial = 300.0
t_end = 1e-2
In [ ]:
def f(t, T, parameters):
    T_ambient = parameters["T_ambient"]
    c1 = parameters["c1"]
    c2 = parameters["c2"]
    c3 = parameters["c3"]
    c4 = parameters["c4"]
    c5 = parameters["c5"]
    return c1 * (c2 * T_ambient**4 - T**4) + c3 - c4 / T - c5 * (T - T_ambient)
In [ ]:
In [ ]:
In [ ]:
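One possible sketch of the Euler loop is below. The function name `euler` and the choice of 1000 steps are our own, and `f` and `parameters` are repeated from the cells above so the sketch stands alone:

```python
import numpy

# Repeated from the cells above so this sketch is self-contained.
parameters = {"T_ambient": 290.0, "c1": 1.0e-5, "c2": 0.9,
              "c3": 0.0, "c4": 1.0e-2, "c5": 1.0}

def f(t, T, parameters):
    Ta = parameters["T_ambient"]
    c1, c2, c3 = parameters["c1"], parameters["c2"], parameters["c3"]
    c4, c5 = parameters["c4"], parameters["c5"]
    return c1 * (c2 * Ta**4 - T**4) + c3 - c4 / T - c5 * (Ta - T) * (-1.0) if False else \
           c1 * (c2 * Ta**4 - T**4) + c3 - c4 / T - c5 * (T - Ta)

def euler(f, T_initial, t_end, N, parameters):
    """Advance dT/dt = f(t, T) from t = 0 to t = t_end using N Euler steps."""
    dt = t_end / N
    t = numpy.linspace(0.0, t_end, N + 1)
    T = numpy.zeros(N + 1)
    T[0] = T_initial
    for j in range(N):
        # T_{j+1} = T_j + dt * f_j
        T[j + 1] = T[j] + dt * f(t[j], T[j], parameters)
    return t, T

t, T = euler(f, 300.0, 1e-2, 1000, parameters)
```

The result can then be plotted with `pyplot.plot(t, T)`; the temperature should relax quickly from 300 K towards a value just below the ambient temperature.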
As with all integration problems, we expect accuracy (and computation time!) to increase as we increase the number of steps. Euler's method, like the Riemann integral on which it's built, is first order.
A different way of thinking about Euler's method shows explicitly that it's first order. Take the original differential equation
\begin{equation} \frac{\text{d}T}{\text{d}t} = f(t, T). \end{equation}We can directly replace the derivative by using finite differencing. By using Taylor expansion we have
\begin{align} T \left( t_{j+1} \right) &= T \left( t_j \right) + \left( t_{j+1} - t_{j} \right) \left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} + \frac{\left( t_{j+1} - t_{j} \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots \\ &= T \left( t_j \right) + \Delta t \, \left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} + \frac{\left( \Delta t \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots \end{align}By re-arranging we get
\begin{equation} \left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} = \frac{T_{j+1} - T_j}{\Delta t} - \frac{\Delta t}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots \end{equation}This is the forward difference approximation to the first derivative.
By evaluating the original differential equation at $t=t_j$ we get
\begin{equation} \frac{T_{j+1} - T_j}{\Delta t} - \frac{\Delta t}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots = f \left( t_j, T(t_j) \right). \end{equation}This shows that the difference between this approximation from the finite differencing, and the original differential equation, goes as $(\Delta t)^1$ - it is first order. This approximation can be re-arranged to give
\begin{equation} T_{j+1} = T_j + \Delta t \, f_j + \frac{\left( \Delta t \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots \end{equation}By ignoring the higher order terms, we see that this is just Euler's method again.
We can now imagine how to get higher order methods for IVPs: by constructing a higher order approximation to the derivative. A standard approximation is the central difference approximation
\begin{equation} \frac{\text{d}T}{\text{d}t} = \frac{T(t_{j+1}) - T(t_{j-1})}{2 \Delta t} + {\cal O}\left( (\Delta t)^2 \right), \end{equation}which we will use later with PDEs. However, it isn't so useful for ODEs directly. Instead we see it as a suggestion: combine different differencing approximations to get a better method. Standard Runge-Kutta methods do this by repeatedly constructing approximations to the derivative, which are combined. These combinations are chosen so that the Taylor expansion of the algorithm matches the original equation to higher and higher orders.
A second order Runge-Kutta method is
\begin{align} k_{1} &= \Delta t \, f \left( t_j, T_j \right), \\ k_{2} &= \Delta t \, f \left( t_j + \frac{\Delta t}{2}, T_j + \frac{k_{1}}{2} \right), \\ T_{j+1} &= T_j + k_{2}. \end{align}Let's implement that on our problem above:
In [ ]:
In [ ]:
In [ ]:
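A sketch of that second order Runge-Kutta step applied to the same problem; the names `rk2`, `k1`, `k2` and the step count are our choices, and `f` and `parameters` are repeated from above so the cell stands alone:

```python
import numpy

# Repeated from the cells above so this sketch is self-contained.
parameters = {"T_ambient": 290.0, "c1": 1.0e-5, "c2": 0.9,
              "c3": 0.0, "c4": 1.0e-2, "c5": 1.0}

def f(t, T, parameters):
    Ta = parameters["T_ambient"]
    c1, c2, c3 = parameters["c1"], parameters["c2"], parameters["c3"]
    c4, c5 = parameters["c4"], parameters["c5"]
    return c1 * (c2 * Ta**4 - T**4) + c3 - c4 / T - c5 * (T - Ta)

def rk2(f, T_initial, t_end, N, parameters):
    """Second order Runge-Kutta (midpoint) method."""
    dt = t_end / N
    t = numpy.linspace(0.0, t_end, N + 1)
    T = numpy.zeros(N + 1)
    T[0] = T_initial
    for j in range(N):
        k1 = dt * f(t[j], T[j], parameters)
        k2 = dt * f(t[j] + dt / 2.0, T[j] + k1 / 2.0, parameters)
        T[j + 1] = T[j] + k2
    return t, T

t, T = rk2(f, 300.0, 1e-2, 1000, parameters)
```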
The solution looks pretty much identical to that from Euler's method, as this problem is well behaved. In general, the benefits of higher order methods (RK4 is pretty standard) massively outweigh the slight additional effort in implementing them.
Of course, a PV cell is not one component with one temperature, but different materials coupled together. Let's assume it's made of three components, as in the Jones and Underwood paper: $T_{(1)}(t)$ is the temperature of the silicon cells, $T_{(2)}(t)$ the temperature of the trilaminate, and $T_{(3)}(t)$ the temperature of the glass face. We can write the temperature behaviour as the system of differential equations
\begin{equation} \frac{\text{d}{\bf T}}{\text{d}t} = {\bf f} \left( t, {\bf T} \right), \quad {\bf T}(0) = {\bf T}_0. \end{equation}Here the vector function ${\bf T}(t) = \left( T_{(1)}(t), T_{(2)}(t), T_{(3)}(t) \right)^T$.
To be concrete let's assume that the silicon behaves as in the single equation model above,
\begin{equation} \frac{\text{d}T_{(1)}}{\text{d}t} = f_{(1)}(t, {\bf T}) = c_{1} \left( c_{2} T_{\text{ambient}}^4 - T_{(1)}^4 \right) + c_{3} - \frac{c_4}{T_{(1)}} - c_5 ( T_{(1)} - T_{\text{ambient}} ), \end{equation}whilst the trilaminate and the glass face try to relax to the temperature of the silicon and the ambient,
\begin{equation} \frac{\text{d}T_{(k)}}{\text{d}t} = f_{(k)}(t, {\bf T}) = - c_5 ( T_{(k)} - T_{\text{ambient}} ) - c_6 ( T_{(k)} - T_{(1)} ), \quad k = 2, 3. \end{equation}We'll use the same parameters as above, and couple the materials using $c_6 = 200$. We'll start the different components at temperatures ${\bf T}_0 = (300, 302, 304)^T$.
The crucial point for numerical methods: nothing conceptually changes. We extend our methods from the scalar to the vector case directly. Where before we had $T(t_j) = T_j$ we now have ${\bf T}(t_j) = {\bf T}_j$, and we can write Euler's method, for example, as
\begin{equation} {\bf T}_{j+1} = {\bf T}_j + \Delta t \, {\bf f} \left( t_j, {\bf T}_j \right). \end{equation}Even better, the code implementation needs no alteration:
In [ ]:
parameters_system = { "T_ambient" : 290.0,
                      "c1" : 1.0e-5,
                      "c2" : 0.9,
                      "c3" : 0.0,
                      "c4" : 1.0e-2,
                      "c5" : 1.0,
                      "c6" : 200.0}
T_initial = numpy.array([300.0, 302.0, 304.0])
t_end = 1e-2
In [ ]:
def f_system(t, T, parameters):
    T_ambient = parameters["T_ambient"]
    c1 = parameters["c1"]
    c2 = parameters["c2"]
    c3 = parameters["c3"]
    c4 = parameters["c4"]
    c5 = parameters["c5"]
    c6 = parameters["c6"]
    f = numpy.zeros_like(T)
    f[0] = c1 * (c2 * T_ambient**4 - T[0]**4) + c3 - c4 / T[0] - c5 * (T[0] - T_ambient)
    f[1] = - c5 * (T[1] - T_ambient) - c6 * (T[1] - T[0])
    f[2] = - c5 * (T[2] - T_ambient) - c6 * (T[2] - T[0])
    return f
In [ ]:
In [ ]:
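A sketch of the same Euler loop for the vector case: the only change is that the solution is stored as a 2-d array with one row per timestep. The helper name `euler_system` is our choice, and `f_system` and the parameters are repeated so the cell stands alone:

```python
import numpy

# Repeated from the cells above so this sketch is self-contained.
parameters_system = {"T_ambient": 290.0, "c1": 1.0e-5, "c2": 0.9, "c3": 0.0,
                     "c4": 1.0e-2, "c5": 1.0, "c6": 200.0}

def f_system(t, T, parameters):
    Ta = parameters["T_ambient"]
    c1, c2, c3 = parameters["c1"], parameters["c2"], parameters["c3"]
    c4, c5, c6 = parameters["c4"], parameters["c5"], parameters["c6"]
    f = numpy.zeros_like(T)
    f[0] = c1 * (c2 * Ta**4 - T[0]**4) + c3 - c4 / T[0] - c5 * (T[0] - Ta)
    f[1] = - c5 * (T[1] - Ta) - c6 * (T[1] - T[0])
    f[2] = - c5 * (T[2] - Ta) - c6 * (T[2] - T[0])
    return f

def euler_system(f, T_initial, t_end, N, parameters):
    """The scalar Euler loop, unchanged except that T is stored row by row."""
    dt = t_end / N
    t = numpy.linspace(0.0, t_end, N + 1)
    T = numpy.zeros((N + 1, len(T_initial)))
    T[0, :] = T_initial
    for j in range(N):
        T[j + 1, :] = T[j, :] + dt * f(t[j], T[j, :], parameters)
    return t, T

t, T = euler_system(f_system, numpy.array([300.0, 302.0, 304.0]),
                    1e-2, 1000, parameters_system)
```

Each column of `T` holds one component, so `pyplot.plot(t, T)` draws all three temperatures at once.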
Adding stochastic effects makes things quite a bit more complex: see D. Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review 43:525-546 (2001), for more details.
Let's suppose that there's some fluctuating heat source in the cell that we can't explicitly model. Going back to the single cell case, let's write it as
\begin{equation} \frac{\text{d}T}{\text{d}t} = f(t, T) + g(T) \frac{\text{d}W}{\text{d}t}. \end{equation}Here $W(t)$ is a random, or Brownian, or Wiener process. It represents the random fluctuating heat source that we can't explicitly model. The process is continuous but extremely jagged: its increment over a timestep $\Delta t$ is drawn from a normal distribution with mean zero and standard deviation $\sqrt{\Delta t}$.
Because of this extreme behaviour, the derivative doesn't really make sense: instead we should use the integral form.
In our integral form we get
\begin{equation} T_{j+1} = T_j + \Delta t \, f_j + \int_{t_j}^{t_{j+1}} \text{d}t \, g(T) \frac{\text{d}W}{\text{d}t}. \end{equation}We approximate this final integral at the left edge $t_j$ as
\begin{equation} \int_{t_j}^{t_{j+1}} \text{d}t \, g(T) \frac{\text{d}W}{\text{d}t} \approx g(T_j) \, \text{d}W_j, \end{equation}where $\text{d}W_j$ is the increment of the random process over the interval $[t_j, t_{j+1}]$: a random number drawn from a normal distribution with mean zero and standard deviation $\sqrt{\Delta t}$.
This is the Euler-Maruyama method.
Let's take our original single temperature model and add a temperature dependent fluctuation $g(T) = (T - T_{\text{ambient}})^2$.
In [ ]:
from numpy.random import randn
In [ ]:
def g_stochastic(t, T, parameters):
    T_ambient = parameters["T_ambient"]
    return (T - T_ambient)**2
In [ ]:
parameters = { "T_ambient" : 290.0,
               "c1" : 1.0e-5,
               "c2" : 0.9,
               "c3" : 0.0,
               "c4" : 1.0e-2,
               "c5" : 1.0}
T_initial = 300.0
t_end = 1e-2
In [ ]:
In [ ]:
In [ ]:
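A sketch of a single Euler-Maruyama realisation. We use numpy's `default_rng` generator here (rather than the bare `randn` imported above) so the run is reproducible; the seed, step count and helper names are our own choices, and `f`, `g_stochastic` and the parameters are repeated so the cell stands alone:

```python
import numpy

# Repeated from the cells above so this sketch is self-contained.
parameters = {"T_ambient": 290.0, "c1": 1.0e-5, "c2": 0.9,
              "c3": 0.0, "c4": 1.0e-2, "c5": 1.0}

def f(t, T, parameters):
    Ta = parameters["T_ambient"]
    c1, c2, c3 = parameters["c1"], parameters["c2"], parameters["c3"]
    c4, c5 = parameters["c4"], parameters["c5"]
    return c1 * (c2 * Ta**4 - T**4) + c3 - c4 / T - c5 * (T - Ta)

def g_stochastic(t, T, parameters):
    return (T - parameters["T_ambient"])**2

def euler_maruyama(f, g, T_initial, t_end, N, parameters, seed=42):
    """One realisation: the deterministic Euler step plus g(T_j) dW_j."""
    rng = numpy.random.default_rng(seed)
    dt = t_end / N
    t = numpy.linspace(0.0, t_end, N + 1)
    T = numpy.zeros(N + 1)
    T[0] = T_initial
    for j in range(N):
        # dW_j: normal, mean zero, standard deviation sqrt(dt)
        dW = numpy.sqrt(dt) * rng.standard_normal()
        T[j + 1] = (T[j] + dt * f(t[j], T[j], parameters)
                    + g(t[j], T[j], parameters) * dW)
    return t, T

t, T = euler_maruyama(f, g_stochastic, 300.0, 1e-2, 1000, parameters)
```

Each different seed gives a different realisation of the noise, so re-running with another seed gives a visibly different path.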
In a fluctuating problem like this, a single simulation doesn't tell you very much. Instead we should perform many simulations and average the result. Let's run this 1000 times:
In [ ]:
In [ ]:
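One way to sketch the ensemble run: rather than looping over 1000 separate simulations, all realisations can be advanced simultaneously, since every numpy operation below acts elementwise across a vector of paths. The vectorisation, seed and helper names are our choices; `f` and `g_stochastic` are repeated so the cell stands alone:

```python
import numpy

# Repeated from the cells above so this sketch is self-contained.
parameters = {"T_ambient": 290.0, "c1": 1.0e-5, "c2": 0.9,
              "c3": 0.0, "c4": 1.0e-2, "c5": 1.0}

def f(t, T, parameters):
    Ta = parameters["T_ambient"]
    c1, c2, c3 = parameters["c1"], parameters["c2"], parameters["c3"]
    c4, c5 = parameters["c4"], parameters["c5"]
    return c1 * (c2 * Ta**4 - T**4) + c3 - c4 / T - c5 * (T - Ta)

def g_stochastic(t, T, parameters):
    return (T - parameters["T_ambient"])**2

def euler_maruyama_ensemble(f, g, T_initial, t_end, N, M, parameters, seed=12345):
    """Advance M independent realisations at once; return the final temperatures."""
    rng = numpy.random.default_rng(seed)
    dt = t_end / N
    T = numpy.full(M, T_initial)   # one entry per realisation
    for j in range(N):
        t_j = j * dt
        # one independent increment per path
        dW = numpy.sqrt(dt) * rng.standard_normal(M)
        T = T + dt * f(t_j, T, parameters) + g(t_j, T, parameters) * dW
    return T

T_final = euler_maruyama_ensemble(f, g_stochastic, 300.0, 1e-2, 1000, 1000, parameters)
T_mean = T_final.mean()
```

The mean over the 1000 paths can then be compared directly to the deterministic solution.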
The average behaviour looks much like the differential equation model, but now we can model variability as well.
Read Higham's paper and try applying the Milstein method
\begin{equation} T_{j+1} = T_j + \Delta t \, f_j + g_j \, \text{d}W_j + \frac{1}{2} g_j g'_j \left( \text{d}W_j^2 - \Delta t \right) \end{equation}to the problem above. Here
\begin{equation} g'_j = \left. \frac{\text{d}g(T)}{\text{d}T} \right|_{t=t_j}. \end{equation}
In Python, the standard solvers for initial value problems are in the scipy.integrate library. The easiest to use is scipy.integrate.odeint, although other solvers offer more control for complex problems.
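As a brief sketch of scipy.integrate.odeint applied to the single-temperature model: odeint passes the state first, i.e. it expects `func(T, t, ...)`, so our `f(t, T, parameters)` needs a thin wrapper (the wrapper name is our own):

```python
import numpy
from scipy.integrate import odeint

parameters = {"T_ambient": 290.0, "c1": 1.0e-5, "c2": 0.9,
              "c3": 0.0, "c4": 1.0e-2, "c5": 1.0}

# odeint calls the right hand side as func(y, t, *args): state first.
def f_for_odeint(T, t, parameters):
    Ta = parameters["T_ambient"]
    c1, c2, c3 = parameters["c1"], parameters["c2"], parameters["c3"]
    c4, c5 = parameters["c4"], parameters["c5"]
    return c1 * (c2 * Ta**4 - T**4) + c3 - c4 / T - c5 * (T - Ta)

t = numpy.linspace(0.0, 1e-2, 101)
T = odeint(f_for_odeint, 300.0, t, args=(parameters,))
```

odeint chooses its own internal timesteps and error control, so no step count needs to be supplied; the returned array has one row per requested output time.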