This Jupyter notebook can be found at: https://github.com/environmentalscience/essm/blob/master/docs/examples/examples_numerics.ipynb
Below, we will import variable and equation definitions that were previously exported from api_features.ipynb by running the file test_equation_definitions.py:
In [1]:
from IPython.display import display
from sympy import init_printing, latex
init_printing()
from sympy.printing import StrPrinter
StrPrinter._print_Quantity = lambda self, expr: str(expr.abbrev) # displays short units (m instead of meter)
In [2]:
%run -i 'test_equation_definitions.py'
See here for detailed instructions on how to turn SymPy expressions into code: https://docs.sympy.org/latest/modules/codegen.html
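As a minimal illustration of the idea (this snippet is not part of the original notebook; sympy.lambdify is one of the entry points described in those docs):
from sympy import symbols, lambdify
a, b = symbols('a b')
f = lambdify((a, b), a**2 + b, 'numpy')  # compile the expression to a numpy-aware function
f(3., 1.)  # returns 10.0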
We will first list all equations defined in this notebook:
In [3]:
for eq in Equation.__registry__.keys():
    print(eq.definition.name + ': ' + str(eq))
In [4]:
def print_dict(vdict, list_vars=None):
    """Print values and units of variables in vdict."""
    if not list_vars:
        list_vars = vdict.keys()
    for var1 in list_vars:
        unit1 = var1.definition.unit
        if unit1 == 1:
            unit1 = ''
        if vdict[var1] is not None:
            print('{0}: {1} {2}'.format(var1.name, str(vdict[var1]), str(unit1)))
In [5]:
vdict = Variable.__defaults__.copy()
print_dict(vdict)
We can substitute a range of equations into each other by using the custom function subs_eq:
In [6]:
from essm.variables.utils import subs_eq
subs_eq(eq_Le, [eq_alphaa, eq_Dva])
Out[6]:
We can also use subs_eq to substitute equations into each other together with a dictionary of values. We will first add an entry for T_a to the dictionary and then substitute:
In [7]:
vdict[T_a] = 300.
subs_eq(eq_Le, [eq_alphaa, eq_Dva], vdict)
Out[7]:
In [8]:
#import theano
from sympy.printing.theanocode import theano_function
import numpy as np
We will now create two long arrays of values representing T_g and n_g respectively, and show how long it takes to compute ideal gas law values.
In [9]:
npoints = 10000
xmin = 290.
xmax = 310.
Tvals = np.arange(xmin, xmax, (xmax - xmin)/npoints)
xmin = 0.1
xmax = 0.5
nvals = np.arange(xmin, xmax, (xmax-xmin)/npoints)
In [10]:
%%time
# looping
expr = eq_ideal_gas_law.rhs.subs(Variable.__defaults__)
resvals0 = []
for i in range(len(Tvals)):
    resvals0.append(expr.subs({T_g: Tvals[i], n_g: nvals[i]}))
In [11]:
%%time
# Using theano
f1 = theano_function([T_g, n_g], [eq_ideal_gas_law.rhs.subs(Variable.__defaults__)], dims={T_g:1, n_g:1})
resvals1 = f1(Tvals, nvals)
In [12]:
list(resvals0) == list(resvals1)
Out[12]:
Both approaches give identical results, but theano_function makes the computation a lot faster.
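If theano is not available, sympy.lambdify with the numpy backend gives a similar vectorized evaluation (a sketch, not from the original notebook; the results should match resvals1 to floating-point precision):
from sympy import lambdify
f_np = lambdify((T_g, n_g), eq_ideal_gas_law.rhs.subs(Variable.__defaults__), 'numpy')
resvals_np = f_np(Tvals, nvals)  # evaluates element-wise over the whole arrays at once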
In [13]:
from sympy import nsolve
In [14]:
vdict = Variable.__defaults__.copy()
vdict[Pr] = 0.71
vdict[Re_c] = 3000.
vdict[Nu] = 1000.
expr = eq_Nu_forced_all.subs(vdict)
nsolve(expr, 1000.)
Out[14]:
Now applying to a long list of Nu-values:
In [15]:
npoints = 100
xmin = 1000.
xmax = 1200.
Nuvals = np.arange(xmin, xmax, (xmax - xmin)/npoints)
In [16]:
%%time
# Solving for a range of Nu values
vdict = Variable.__defaults__.copy()
vdict[Pr] = 0.71
vdict[Re_c] = 3000.
resvals = []
for Nu1 in Nuvals:
    vdict[Nu] = Nu1
    resvals.append(nsolve(eq_Nu_forced_all.subs(vdict), 1000.))
We will now again use a theano function to make it faster. First we import optimize from scipy and prepare the theano_function:
In [17]:
import scipy.optimize as sciopt
vdict = Variable.__defaults__.copy()
vdict[Pr] = 0.71
vdict[Re_c] = 3000.
expr = eq_Nu_forced_all.subs(vdict)
expr1 = expr.rhs - expr.lhs
fun_tf = theano_function([Re, Nu], [expr1], dims={Nu:1, Re:1})
x0vals = np.full(Nuvals.shape, fill_value=2000.) # array of same shape as Nuvals, with initial guess
In [18]:
%%time
# Solving for a range of Nu values
resvals1 = sciopt.fsolve(fun_tf, args=Nuvals, x0=x0vals)
In [19]:
np.mean(abs((resvals - resvals1)/resvals))
Out[19]:
Using theano and scipy makes the computation 2 orders of magnitude faster, and the results differ only by 10$^{-10}$%! Note, however, that scipy slows down for large arrays, so it is more efficient to re-run it repeatedly on subsections of the array:
In [20]:
npoints = 1000
xmin = 1000.
xmax = 1200.
Nuvals = np.arange(xmin, xmax, (xmax - xmin)/npoints)
x0vals = np.full(Nuvals.shape, fill_value=2000.)
In [21]:
%%time
# Solving for a range of Nu values
resvals1 = sciopt.fsolve(fun_tf, args=Nuvals, x0=x0vals)
We will now test that we can process Nuvals bit by bit and re-create it consistently:
In [22]:
# Solving for a range of Nu values
imax = len(Nuvals)
i0 = 0
idiff = 100
i1 = i0
resvals2 = []
while i1 < imax - 1:
    i0 = i1  # note that resvals[0:2] + resvals[2:4] = resvals[0:4]
    i1 = min(i0 + idiff, imax)
    resvals0 = Nuvals[i0:i1]
    resvals2 = np.append(resvals2, resvals0)
print(list(resvals2) == list(Nuvals))
Now we will run fsolve for portions of Nuvals bit by bit:
In [23]:
%%time
# Solving for a range of Nu values
imax = len(Nuvals)
i0 = 0
idiff = 100
i1 = i0
resvals2 = []
while i1 < imax - 1:
    i0 = i1  # note that resvals[0:2] + resvals[2:4] = resvals[0:4]
    i1 = min(i0 + idiff, imax)
    resvals0 = sciopt.fsolve(fun_tf, args=Nuvals[i0:i1], x0=x0vals[i0:i1])
    resvals2 = np.append(resvals2, resvals0)
In [24]:
np.mean(abs((resvals1 - resvals2)/resvals1))
Out[24]:
It is surprising that resvals1 and resvals2 differ at all, but in any case it is clear that processing the data in relatively small slices is important to keep scipy.optimize.fsolve time-efficient.
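The chunking pattern above can be wrapped in a small reusable helper (a sketch; chunked_fsolve is a hypothetical name, not part of essm or scipy):
def chunked_fsolve(fun, argvals, x0vals, chunk=100):
    """Apply sciopt.fsolve to consecutive slices of argvals to avoid
    the slowdown fsolve shows on large arrays."""
    res = np.array([])
    for i0 in range(0, len(argvals), chunk):
        i1 = min(i0 + chunk, len(argvals))
        res = np.append(res, sciopt.fsolve(fun, args=argvals[i0:i1], x0=x0vals[i0:i1]))
    return res

# Equivalent to the loop above:
# resvals2 = chunked_fsolve(fun_tf, Nuvals, x0vals, chunk=100)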
In [25]:
from sympy.utilities.autowrap import autowrap
In [26]:
from sympy import symbols
x, y, z = symbols('x y z')
expr = ((x - y + z)**(13)).expand()
autowrap_func = autowrap(expr)
In [27]:
%%time
autowrap_func(1, 4, 2)
Out[27]:
In [28]:
%%time
expr.subs({x:1, y:4, z:2})
Out[28]:
Use of autowrap made the calculation 3 orders of magnitude faster than substitution of values into the original expression!
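By default, autowrap compiles via f2py; if no Fortran compiler is available, the Cython backend can be requested instead (a sketch, assuming Cython and a C compiler are installed):
autowrap_func_cy = autowrap(expr, backend='cython')
autowrap_func_cy(1, 4, 2)  # same result, compiled via Cython instead of f2py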
Another way is to use binary_function:
In [29]:
from sympy.utilities.autowrap import binary_function
f = binary_function('f', expr)
In [30]:
%%time
f(x,y,z).evalf(2, subs={x:1, y:4, z:2})
Out[30]:
However, for the above example, binary_function was 2 orders of magnitude slower than autowrap. This is because binary_function returns a SymPy Function whose numerical implementation is compiled, so each call still passes through symbolic substitution and evalf, whereas autowrap returns a plain numeric callable.