TaylorSeries.jl can be thought of as a polynomial algebraic manipulator in one or more variables; these two cases are treated separately. Three new types are defined, Taylor1, HomogeneousPolynomial and TaylorN, which correspond to expansions in one independent variable, homogeneous polynomials in several variables, and polynomial series in many independent variables, respectively. These types are subtypes of Number and are defined parametrically.
The package is loaded as usual:
In [1]:
using TaylorSeries
Taylor expansions in one variable are represented by the Taylor1 type, which consists of a vector of coefficients (field coeffs) and the maximum order considered for the expansion (field order). The coefficients are arranged in ascending order with respect to the power of the independent variable, so that coeffs[1] is the constant term, coeffs[2] gives the first-order term, etc. This is a dense representation of the polynomial. The order of the polynomial can be omitted in the constructor, in which case it is fixed from the length of the vector of coefficients; otherwise, the maximum of the length of the vector of coefficients and the given integer is taken.
In [2]:
Taylor1([1, 2, 3]) # Polynomial of order 2 with coefficients 1, 2, 3
Out[2]:
In [3]:
Taylor1([0.0, 1im]) # Also works with complex numbers
Out[3]:
In [4]:
affine(a) = a + taylor1_variable(typeof(a),5)  # a + t, of order 5
Out[4]:
In [5]:
t = affine(0.0) # Independent variable `t`
Out[5]:
Note that the information about the maximum order considered is displayed using big-O notation. The definition of affine(a) uses the function taylor1_variable, which is a shortcut to define the independent variable of a Taylor expansion with a given type and order. As shown below, this is one of the easiest ways to work with the package.
The usual arithmetic operators (+, -, *, /, ^, ==) have been extended to work with the Taylor1 type, including promotions that involve Numbers. The operations return a valid Taylor expansion with the same maximum order; compare the last examples below, where this is not possible:
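The product of two truncated expansions is the truncated convolution (Cauchy product) of their coefficient vectors, $c_k = \sum_{j=0}^{k} a_j b_{k-j}$, with terms beyond the maximum order discarded. A minimal sketch of this rule, written in Python purely for illustration (the function name is ours, not part of the package):

```python
def mul_taylor(a, b):
    """Truncated Cauchy product of two coefficient vectors of equal length.

    a[k] holds the coefficient of t^k; terms beyond the common
    maximum order are simply dropped.
    """
    order = len(a)  # number of coefficients = maximum order + 1
    c = [0] * order
    for k in range(order):
        c[k] = sum(a[j] * b[k - j] for j in range(k + 1))
    return c

# (2.5 + 3t) * t = 2.5 t + 3 t^2, truncated at order 2
print(mul_taylor([2.5, 3, 0], [0, 1, 0]))  # → [0.0, 2.5, 3.0]
```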
In [6]:
t*(3t+2.5)
Out[6]:
In [7]:
1/(1-t)
Out[7]:
In [8]:
t*(t^2-4)/(t+2)
Out[8]:
In [9]:
tI = im*t
Out[9]:
In [10]:
t^6 # order is 5
Out[10]:
In [11]:
(1-t)^3.2
Out[11]:
In [12]:
(1+t)^t
Out[12]:
If no valid Taylor expansion can be computed, an error is thrown.
In [13]:
1/t
In [14]:
t^3.2
Several elementary functions have been implemented; these compute their coefficients recursively. So far, these functions are exp, log, sqrt, sin, cos and tan; more will be added in the future. Note that this way of obtaining the Taylor coefficients is not the laziest one, in particular for many independent variables. Yet, it is quite efficient, especially for the integration of ordinary differential equations, which is among the applications we have in mind.
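For example, for $E(t) = \exp(a(t))$ the relation $E' = a'\,E$ yields the recursion $E_0 = \exp(a_0)$, $E_k = \frac{1}{k}\sum_{j=1}^{k} j\, a_j E_{k-j}$. A sketch in Python, illustrative only (the identifiers are not the package's):

```python
from math import exp, factorial

def exp_taylor(a):
    """Taylor coefficients of exp(a(t)) from those of a(t).

    Uses the recursion E_k = (1/k) * sum_{j=1}^{k} j*a_j*E_{k-j},
    obtained by differentiating E = exp(a), i.e. E' = a' * E.
    """
    e = [0.0] * len(a)
    e[0] = exp(a[0])
    for k in range(1, len(a)):
        e[k] = sum(j * a[j] * e[k - j] for j in range(1, k + 1)) / k
    return e

# exp(t) to order 5: the coefficients are 1/k!
coeffs = exp_taylor([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
```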
In [15]:
exp(t)
Out[15]:
In [16]:
log(1-t)
Out[16]:
In [17]:
sqrt(t)
In [18]:
sqrt(1 + t)
Out[18]:
In [19]:
imag(exp(tI)')
Out[19]:
In [20]:
real(exp(Taylor1([0.0,1im],17))) - cos(Taylor1([0.0,1.0],17)) == 0.0
Out[20]:
In [21]:
convert(Taylor1{Rational{Int64}}, exp(t))
Out[21]:
Differentiating and integrating is straightforward for polynomial expansions in one variable. The last coefficient of a derivative is set to zero to keep the same order as the original polynomial; for the integral, an integration constant may be set to a different value (the default is zero). The order of the resulting polynomial is not changed. The $n$-th ($n \ge 0$) derivative is obtained with deriv(a,n), where a is a Taylor series; the default is $n=1$.
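On the coefficient vector these operations are simple shifts: the derivative has coefficients $k\,c_k$ shifted down one slot (with a zero appended to preserve the order), and the antiderivative has $c_k/(k+1)$ shifted up, with the integration constant in the first slot. A sketch in Python (hypothetical names, not the package's):

```python
def diff_taylor(c):
    """Derivative of a Taylor polynomial; the last coefficient is
    set to zero so the order is preserved."""
    d = [k * c[k] for k in range(1, len(c))]
    d.append(0.0)
    return d

def integ_taylor(c, x0=0.0):
    """Antiderivative with integration constant x0; the order is
    unchanged, so the top coefficient of c is dropped."""
    return [x0] + [c[k] / (k + 1) for k in range(len(c) - 1)]

p = [1.0, 1.0, 0.5, 1/6]                   # 1 + t + t^2/2 + t^3/6
print(diff_taylor(p))                      # → [1.0, 1.0, 0.5, 0.0]
print(integ_taylor(diff_taylor(p), 1.0))   # recovers p
```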
In [22]:
diffTaylor(exp(t))
Out[22]:
In [23]:
integTaylor(exp(t))
Out[23]:
In [24]:
integTaylor( exp(t), 1.0)
Out[24]:
In [25]:
integTaylor( diffTaylor( exp(-t)), 1.0 ) == exp(-t)
Out[25]:
In [26]:
deriv( exp(affine(1.0))) == exp(1.0) # deriv of exp(1+t) at t=0
Out[26]:
In [27]:
deriv( exp(affine(1.0)), 5) == exp(1.0) # Fifth derivative of `exp(1+t)`
Out[27]:
To evaluate a Taylor series at a point, Horner's rule is used via the function evaluate(a::Taylor, dt::Number). Here, $dt$ is the increment from the point $t_0$ around which the Taylor expansion is calculated, i.e., the series is evaluated at $t = t_0 + dt$. Omitting $dt$ corresponds to $dt = 0$.
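Horner's rule evaluates the polynomial from the highest coefficient downwards, using one multiplication and one addition per order. A sketch in Python (for illustration; not the package's implementation):

```python
from math import e

def evaluate_taylor(c, dt=0.0):
    """Evaluate sum_k c[k]*dt^k by Horner's rule."""
    acc = c[-1]
    for k in range(len(c) - 2, -1, -1):
        acc = acc * dt + c[k]
    return acc

# exp(t) truncated at order 5, evaluated at dt = 1: the error is
# dominated by the first dropped term, 1/6! ≈ 1.4e-3
coeffs = [1.0, 1.0, 1/2, 1/6, 1/24, 1/120]
print(evaluate_taylor(coeffs, 1.0) - e)
```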
In [28]:
evaluate(exp(affine(1.0))) - e # exp(t) around t0=1 (order 5), evaluated there (dt=0)
Out[28]:
In [29]:
evaluate(exp(t), 1) - e # exp(t) around t0=0 (order 5), evaluated at t=1
Out[29]:
In [30]:
evaluate( exp( taylor1_variable(17) ), 1) - e # exp(t) around t0=0 (order 17), evaluated at t=1
Out[30]:
In [31]:
tBig = Taylor1([zero(BigFloat),one(BigFloat)],50) # With BigFloats
Out[31]:
In [32]:
eBig = evaluate( exp(tBig), one(BigFloat) )
Out[32]:
In [33]:
e - eBig
Out[33]:
A polynomial in $N>1$ variables can be represented in (at least) two ways: As a vector whose coefficients are homogeneous polynomials of fixed degree, or as a vector whose coefficients are polynomials in $N-1$ variables. We have opted to implement the first option, which seems to show better performance. An elegant (lazy) implementation of the second representation was discussed on the julia-users list.
TaylorN is thus constructed as a vector of parameterized homogeneous polynomials defined by the type HomogeneousPolynomial, which in turn is a vector of coefficients of given order (degree). This implementation requires the user to specify the (maximum) order and the number of independent variables, which is done using the set_variables(names) function. names is a string consisting of the desired output names of the variables, separated by spaces. A vector of the resulting Taylor variables is returned:
In [34]:
x, y = set_variables("x y")
Out[34]:
In [35]:
x
Out[35]:
In [36]:
typeof(x)
Out[36]:
In [37]:
x.order
Out[37]:
In [38]:
x.coeffs
Out[38]:
As shown, the resulting objects are of type TaylorN{Float64}. There is an optional order keyword argument for set_variables:
In [39]:
set_variables("x y", order=10)
Out[39]:
Numbered variables are also available by specifying a single
variable name and the optional keyword argument numvars
:
In [40]:
set_variables("α", numvars=3)
Out[40]:
The function show_params_TaylorN() displays the current values of these parameters in an info block.
In [41]:
show_params_TaylorN()
Internally, changing these parameters defines dictionaries that translate the position of the coefficients of a HomogeneousPolynomial into the corresponding multi-variable monomials. Fixing these values from the start is imperative.
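Concretely, each position in the coefficient vector of a HomogeneousPolynomial of degree $d$ in $N$ variables corresponds to an exponent tuple $(i_1,\dots,i_N)$ with $i_1+\cdots+i_N = d$, of which there are $\binom{d+N-1}{N-1}$. A sketch of this enumeration in Python (illustrative only, not the package's internals):

```python
from itertools import combinations_with_replacement
from math import comb

def monomials(num_vars, degree):
    """All exponent tuples (i_1, ..., i_N) summing to `degree`."""
    result = []
    for combo in combinations_with_replacement(range(num_vars), degree):
        exps = [0] * num_vars
        for v in combo:
            exps[v] += 1          # each drawn index raises one exponent
        result.append(tuple(exps))
    return result

print(len(monomials(2, 3)))       # → 4: x^3, x^2 y, x y^2, y^3
print(comb(3 + 2 - 1, 2 - 1))     # → 4, the closed-form count
```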
The easiest way to construct a TaylorN object is by defining symbols for the independent variables, as above. Again, the Taylor expansions are implemented around 0 for all variables; if the expansion is needed around a different value, the trick is a simple translation of the corresponding independent variable, $x \to x+a$.
Other ways of constructing TaylorN polynomials involve using HomogeneousPolynomial objects directly, which is cumbersome:
In [42]:
set_variables("x", numvars=2);
In [43]:
HomogeneousPolynomial([1,-1])
Out[43]:
In [44]:
TaylorN( [HomogeneousPolynomial([1,0]), HomogeneousPolynomial([1,2,3])], 4)
Out[44]:
As before, the usual arithmetic operators (+, -, *, /, ^, ==) have been extended to work with TaylorN objects, including the appropriate promotions to deal with numbers. (Some of the arithmetic operations have also been extended for HomogeneousPolynomial, whenever the result is a HomogeneousPolynomial; division, for instance, is not extended.) The elementary functions have also been implemented, again by computing their coefficients recursively:
In [45]:
x, y = set_variables("x y", order=10);
In [46]:
exy = exp(x+y)
Out[46]:
The function get_coeff(a,v) gives the coefficient of a that corresponds to the monomial specified by the vector of powers v:
In [47]:
get_coeff(exy, [3,5]) == 1/720
Out[47]:
In [48]:
rationalize(get_coeff(exy, [3,5]))
Out[48]:
Partial differentiation is also implemented for TaylorN objects through diffTaylor; integration is yet to be implemented.
In [49]:
f(x,y) = x^3 + 2x^2 * y - 7x + 2
Out[49]:
In [50]:
g(x,y) = y - x^4
Out[50]:
In [51]:
diffTaylor( f(x,y), 1 ) # partial derivative with respect to 1st variable
Out[51]:
In [52]:
diffTaylor( g(x,y), 2 )
Out[52]:
In [53]:
diffTaylor( g(x,y), 3 ) # error, since we are dealing with 2 variables
evaluate can also be used for TaylorN objects, acting on vectors of numbers (Real or Complex); the length of the vector must coincide with the number of independent variables.
In [54]:
evaluate(exy, [.1,.02]) == e^0.12
Out[54]:
Functions to compute the gradient, Jacobian and Hessian have also been implemented. Using the functions $f(x,y) = x^3 + 2x^2 y - 7 x + 2$ and $g(x,y) = y-x^4$ defined above, we may use gradient or ∇ (\nabla+TAB); the results are of type Array{TaylorN{T},1}. To compute the Jacobian or Hessian of a vector field evaluated at a point, we use jacobian and hessian:
In [55]:
f1 = f(x,y)
Out[55]:
In [56]:
g1 = g(x,y)
Out[56]:
In [57]:
gradient( g1 )
Out[57]:
In [58]:
∇(f1)
Out[58]:
In [59]:
fg = f1-g1-2*f1*g1
Out[59]:
In [60]:
jacobian([f1,g1], [2,1])
Out[60]:
In [61]:
hessian(fg, [1.0,1.0])
Out[61]:
Euler proved the following four-square identity:
\begin{eqnarray} (a_1^2+a_2^2+a_3^2+a_4^2) \cdot (b_1^2+b_2^2+b_3^2+b_4^2) &=& (a_1 b_1 - a_2 b_2 - a_3 b_3 - a_4 b_4)^2 \\ & & + (a_1 b_2 + a_2 b_1 + a_3 b_4 - a_4 b_3)^2 \\ & & + (a_1 b_3 - a_2 b_4 + a_3 b_1 + a_4 b_2)^2 \\ & & + (a_1 b_4 + a_2 b_3 - a_3 b_2 + a_4 b_1)^2. \end{eqnarray}The following code checks this identity; it can also be found in the test suite of the package.
We first define the variables:
In [62]:
# Define the variables α₁, ..., α₄, β₁, ..., β₄
make_variable(name, index::Int) = string(name, TaylorSeries.subscriptify(index))
Out[62]:
In [63]:
variable_names = [make_variable("α", i) for i in 1:4];
append!(variable_names, [make_variable("β", i) for i in 1:4]);
In [64]:
# Create the Taylor objects (order 4, numvars=8)
a1, a2, a3, a4, b1, b2, b3, b4 = set_variables(variable_names, order=4);
In [65]:
a1
Out[65]:
In [66]:
b1
Out[66]:
Now we compute each term appearing in the above equation, and compare them:
In [67]:
# left-hand side
lhs1 = a1^2 + a2^2 + a3^2 + a4^2;
lhs2 = b1^2 + b2^2 + b3^2 + b4^2;
lhs = lhs1 * lhs2
Out[67]:
In [68]:
# right-hand side
rhs1 = (a1*b1 - a2*b2 - a3*b3 - a4*b4)^2;
rhs2 = (a1*b2 + a2*b1 + a3*b4 - a4*b3)^2;
rhs3 = (a1*b3 - a2*b4 + a3*b1 + a4*b2)^2;
rhs4 = (a1*b4 + a2*b3 - a3*b2 + a4*b1)^2;
rhs = rhs1 + rhs2 + rhs3 + rhs4
Out[68]:
In [69]:
lhs == rhs
Out[69]:
The identity is thus satisfied.
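The same identity can also be spot-checked numerically; here is a quick integer check (in Python, for illustration) using the same sign pattern as the code above, which corresponds to the norm of a quaternion product:

```python
def four_square_identity(a, b):
    """Check Euler's four-square identity for two 4-tuples of integers."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    lhs = (a1**2 + a2**2 + a3**2 + a4**2) * (b1**2 + b2**2 + b3**2 + b4**2)
    # The four squared terms are the components of the quaternion product
    rhs = ((a1*b1 - a2*b2 - a3*b3 - a4*b4)**2
           + (a1*b2 + a2*b1 + a3*b4 - a4*b3)**2
           + (a1*b3 - a2*b4 + a3*b1 + a4*b2)**2
           + (a1*b4 + a2*b3 - a3*b2 + a4*b1)**2)
    return lhs == rhs

print(four_square_identity((1, 2, 3, 4), (5, 6, 7, 8)))  # → True
```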
Richard J. Fateman, from Berkeley, proposed as a stringent test of polynomial multiplication the evaluation of $s \cdot (s+1)$, where $s := (1+x+y+z+w)^{20}$. This is implemented in the function fateman1 below. We also evaluate the alternative form $s^2+s$ in fateman2, which involves fewer operations (and makes a fairer comparison to what Mathematica does). Below we have used Julia v0.4.
In [70]:
set_variables("x", numvars=4, order=40);
In [71]:
function fateman1(degree::Int)
T = Int128
oneH = HomogeneousPolynomial(one(T), 0)
# s = 1 + x + y + z + w
s = TaylorN( [oneH, HomogeneousPolynomial([one(T),one(T),one(T),one(T)],1)], degree )
s = s^degree
# s is converted to order 2*degree
s = TaylorN(s, 2*degree)
s * ( s+TaylorN(oneH, 2*degree) )
end
Out[71]:
In [72]:
@time f1 = fateman1(0);
In [73]:
@time f1 = fateman1(20);
Another implementation of the same:
In [74]:
function fateman2(degree::Int)
T = Int128
oneH = HomogeneousPolynomial(one(T), 0)
# s = 1 + x + y + z + w
s = TaylorN( [oneH, HomogeneousPolynomial([one(T),one(T),one(T),one(T)],1)], degree )
s = s^degree
# s is converted to order 2*degree
s = TaylorN(s, 2*degree)
return s^2 + s
end
Out[74]:
In [75]:
@time f2 = fateman2(0);
In [76]:
@time f2 = fateman2(20);
In [77]:
get_coeff(f2,[1,6,7,20]) # coef x^1 y^6 z^7 w^{20}
Out[77]:
In [78]:
sum(TaylorSeries.size_table) # number of distinct monomials
Out[78]:
The tests above show the necessity of using Int128, that fateman2 is about twice as fast as fateman1, and that the series has 135751 monomials in 4 variables. Mathematica (version 10.2) requires 3.137023 seconds for this test. With TaylorSeries v0.1.2, our implementation of fateman1 is thus about 20% faster, and fateman2 is more than a factor of 2 faster. (The original test by Fateman corresponds to fateman1, which avoids optimizations in ^2.)