A SymPy tutorial

This tutorial provides an introduction to using SymPy within Julia. It owes an enormous debt to the tutorial for using SymPy within Python which may be found here. The overall structure and many examples are taken from that work, with adjustments and additions to illustrate the differences due to using SymPy within Julia.

This tutorial can be read as an IJulia notebook here.

After installing SymPy, which is discussed in the package's README file, we must first load it into Julia with the standard command using:


In [ ]:
using SymPy

The start-up time is a bit lengthy.

Symbols

At the core of SymPy is the introduction of symbolic variables, which differ quite a bit from Julia's variables. Symbolic variables do not immediately evaluate to a value; rather, the "symbolicness" propagates when they are interacted with. To keep things manageable, SymPy does some simplifications along the way.

Symbolic expressions are primarily of the Sym type and can be constructed in the standard way:


In [ ]:
x = Sym("x")


Out[ ]:
$$x$$

This creates a symbolic object x, which can be manipulated through further function calls.
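
For example, combining x with numbers and functions produces new symbolic expressions rather than immediate values:


In [ ]:
x^2 + 2x + 1    # stays symbolic; nothing is evaluated numerically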

There is also the @syms macro, which requires a bit less typing when creating several variables: it assigns each variable in the local scope, so no explicit assignment is necessary. Compare these similar ways to create symbolic variables:


In [ ]:
@syms a b c
a,b,c = Sym("a,b,c")


Out[ ]:
(a,b,c)

Assumptions

Finally, there is the symbols constructor for producing symbolic objects. With symbols it is possible to pass assumptions onto the variables. A list of possible assumptions is here. Some examples are:


In [ ]:
u = symbols("u")
x = symbols("x", real=true)
y1, y2 = symbols("y1, y2", positive=true)
alpha = symbols("alpha", integer=true, positive=true)


Out[ ]:
$$\alpha$$

As seen, the symbols function can be used to make one or more variables with zero, one or more assumptions.

Jumping ahead for a moment to illustrate, we see that solve respects these assumptions by failing to find solutions to these equations:


In [ ]:
solve(x^2 + 1)   # ±i are not real


Out[ ]:
\begin{bmatrix}\end{bmatrix}

In [ ]:
solve(y1 + 1)    # -1 is not positive


Out[ ]:
\begin{bmatrix}\end{bmatrix}

The @syms macro can also have assumptions passed in as follows:


In [ ]:
@syms u1 positive=true u2 positive=true
solve(u1 + u2)  # empty, though solving u1 - u2 is not.


Out[ ]:
0-element Array{Dict{SymPy.Sym,SymPy.Sym},1}

As can be seen, there are several ways to create symbolic values. One caveat is that @syms cannot be used to create a variable whose name matches a function name in Base, as the assignment would clash with the existing binding.
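
For instance, a minimal sketch of this caveat (the commented line would fail, while Sym sidesteps the assignment entirely):


In [ ]:
# @syms sin        # errors: the name `sin` is already taken by Base's function
s = Sym("sin")     # fine: the symbol is bound to `s`, not to the name `sin`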

Special constants

Julia has its own math constants, like pi and e; SymPy does as well. A few of these have Julia counterparts provided by SymPy. For example, these three constants are defined (where oo is for infinity):


In [ ]:
PI, E, oo


Out[ ]:
(pi,E,oo)

(The pretty printing of SymPy objects does not work for tuples.)

Numeric values themselves can be symbolic. This example shows the difference. The first asin call dispatches to Julia's asin function, the second to SymPy's:


In [ ]:
[asin(1), asin(Sym(1))]


Out[ ]:
\begin{bmatrix}1.5707963267949\\\frac{\pi}{2}\end{bmatrix}

Substitution

SymPy provides a means to substitute values in for the symbolic expressions. The specification requires an expression, a variable in the expression to substitute in for, and a new value. For example, this is one way to make a polynomial in a new variable:


In [ ]:
@syms x y
ex = x^2 + 2x + 1
subs(ex, x, y)


Out[ ]:
$$y^{2} + 2 y + 1$$

Substitution can also be numeric:


In [ ]:
subs(ex, x, 0)


Out[ ]:
$$1$$

The output has no free variables, but is still symbolic.

Expressions with more than one variable can have multiple substitutions, where each is expressed as a tuple:


In [ ]:
x,y,z = symbols("x,y,z")
ex = x + y + z
subs(ex, (x,1), (y,pi))


Out[ ]:
$$z + 1 + \pi$$

Pairs can also be used to specify substitutions:


In [ ]:
subs(ex, x=>1, y=>pi)


Out[ ]:
$$z + 1 + \pi$$

And, perhaps more conveniently, symbolic objects have their call method overloaded to allow substitution:


In [ ]:
ex(x=>1, y=>pi)


Out[ ]:
$$z + 1 + \pi$$

A straight call is also possible, where the order of the variables is determined by free_symbols:


In [ ]:
ex(1, pi)


Out[ ]:
$$y + 1 + \pi$$

When the pipeline operator, |>, is convenient, there is a curried form that allows the expression to be implicit:


In [ ]:
ex |> subs(x, 1)


Out[ ]:
$$y + z + 1$$

As subs is very similar in spirit to Julia's replace function, that alias is provided:


In [ ]:
ex |> replace(y, pi)


Out[ ]:
$$x + z + \pi$$

Conversion from symbolic to numeric

SymPy provides two identical means to convert a symbolic math expression to a number. One is evalf, the other N. Within Julia we decouple this, using N to also convert to a Julian value and evalf to leave the conversion as a symbolic object. The N function converts symbolic integers, rationals, irrationals, and complex values, while attempting to find an appropriate Julia type for the value.

To see the difference, we use both on PI:


In [ ]:
N(PI)  # floating-point value


Out[ ]:
3.141592653589793

Whereas, while this may look the same, it is still symbolic:


In [ ]:
evalf(PI)


Out[ ]:
$$3.14159265358979$$

Both N and evalf allow a precision to be specified through a second argument. This is how 30 digits of $\pi$ can be extracted:


In [ ]:
N(PI, 30)


Out[ ]:
3.1415926535897932384626433832793

Here N produces a BigFloat with a precision to match (basically) the specified number of digits. Whereas


In [ ]:
evalf(PI, 30)


Out[ ]:
$$3.14159265358979323846264338328$$

leaves the value as a symbolic object with 30 digits of accuracy.

Explicit conversion via convert(T, ex) can also be done, and is necessary at times if N does not give the desired type.
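
For example, a small sketch (assuming the usual convert methods for Sym values):


In [ ]:
convert(Float64, PI)    # 3.141592653589793, a Float64, not a symbolic object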

Algebraic expressions

SymPy overloads many of Julia's functions to work with symbolic objects, as seen above with asin. The usual mathematical operations such as +, *, -, /, etc. work through Julia's promotion mechanism, where numbers are promoted to symbolic objects; other functions dispatch internally to related SymPy functions.

In almost all cases, thinking about this distinction between numbers and symbolic numbers is unnecessary, as numeric values passed to SymPy functions are typically promoted to symbolic expressions. This conversion will take math constants to their corresponding SymPy counterpart, rational expressions to rational expressions, and floating point values to floating point values. However, there are edge cases. An expression like 1//2 * pi * x will differ from the seemingly identical 1//2 * (pi * x). The former will produce a floating point value from 1//2 * pi before being promoted to a symbolic instance. Using the symbolic value PI makes this expression work either way.
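
A quick illustration of this edge case:


In [ ]:
1//2 * pi * x      # 1//2 * pi is evaluated to a Float64 before promotion
1//2 * (pi * x)    # pi * x is symbolic first, so 1//2 enters exactly
PI/2 * x           # the symbolic PI keeps exactness with either grouping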

Most of Julia's mathematical functions are overloaded to work with symbolic expressions, and Julia's generic definitions are used where possible. This also introduces some edge cases. For example, x^(-2) will balk due to the negative, integer exponent, but either x^(-2//1) or x^Sym(-2) will work as expected, as the former call first dispatches to a generic definition, but the latter two expressions do not.
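
For example:


In [ ]:
# x^(-2)       # errors, as Julia's generic method balks at the negative, integer exponent
x^(-2//1)      # works
x^Sym(-2)      # also works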

SymPy makes it very easy to work with polynomial and rational expressions. First we create some variables:


In [ ]:
@syms x y z


Out[ ]:
(x,y,z)

The expand, factor, collect, and simplify functions

A typical polynomial expression in a single variable can be written in two common ways, expanded or factored form. Using factor and expand can move between the two.

For example,


In [ ]:
p = x^2 + 3x + 2
factor(p)


Out[ ]:
$$\left(x + 1\right) \left(x + 2\right)$$

Or


In [ ]:
expand(prod([(x-i) for i in 1:5]))


Out[ ]:
$$x^{5} - 15 x^{4} + 85 x^{3} - 225 x^{2} + 274 x - 120$$

The factor function factors over the rational numbers, so something like this, despite its obvious irrational factors, is left unfactored:


In [ ]:
factor(x^2 - 2)


Out[ ]:
$$x^{2} - 2$$

When expressions involve one or more variables, it can be convenient to be able to manipulate them. For example, if we define q by:


In [ ]:
q = x*y + x*y^2 + x^2*y + x


Out[ ]:
$$x^{2} y + x y^{2} + x y + x$$

Then we can collect the terms by the variable x:


In [ ]:
collect(q, x)


Out[ ]:
$$x^{2} y + x \left(y^{2} + y + 1\right)$$

or the variable y:


In [ ]:
collect(q, y)


Out[ ]:
$$x y^{2} + x + y \left(x^{2} + x\right)$$

These are identical expressions, though viewed differently.

A more broad-brush approach is to let SymPy simplify the values. In this case, the common value of x is factored out:


In [ ]:
simplify(q)


Out[ ]:
$$x \left(x y + y^{2} + y + 1\right)$$

The simplify function attempts to apply the dozens of functions related to simplification that are part of SymPy. It is also possible to apply these functions one at a time, for example trigsimp does trigonometric simplifications.

The SymPy tutorial illustrates that expand can also result in simplifications through this example:


In [ ]:
expand((x + 1)*(x - 2) - (x - 1)*x)


Out[ ]:
$$-2$$

These methods are not restricted to polynomial expressions and will work with other expressions. For example, factor identifies the following as a factorable object in terms of the variable exp(x):


In [ ]:
factor(exp(2x) + 3exp(x) + 2)


Out[ ]:
$$\left(e^{x} + 1\right) \left(e^{x} + 2\right)$$

Rational expressions: apart, together, cancel

When working with rational expressions, SymPy does not do much simplification unless asked. For example this expression is not simplified:


In [ ]:
r = 1/x + 1/x^2


Out[ ]:
$$\frac{1}{x} + \frac{1}{x^{2}}$$

To put the terms of r over a common denominator, the together function is available:


In [ ]:
together(r)


Out[ ]:
$$\frac{1}{x^{2}} \left(x + 1\right)$$

The apart function does the reverse, creating a partial fraction decomposition from a ratio of polynomials:


In [ ]:
apart( (4x^3 + 21x^2 + 10x + 12) /  (x^4 + 5x^3 + 5x^2 + 4x))


Out[ ]:
$$\frac{2 x - 1}{x^{2} + x + 1} - \frac{1}{x + 4} + \frac{3}{x}$$

Sometimes SymPy will cancel factors, as here:


In [ ]:
top = (x-1)*(x-2)*(x-3)
bottom = (x-1)*(x-4)
top/bottom


Out[ ]:
$$\frac{\left(x - 3\right) \left(x - 2\right)}{x - 4}$$

(This might make math faculty a bit upset, but it is in line with student thinking.)

However, with expanded terms, the common factor of (x-1) is not cancelled:


In [ ]:
r = expand(top) / expand(bottom)


Out[ ]:
$$\frac{x^{3} - 6 x^{2} + 11 x - 6}{x^{2} - 5 x + 4}$$

The cancel function instructs SymPy to perform cancellations. It takes rational functions and puts them in a canonical $p/q$ form with no common (rational) factors and leading terms which are integers:


In [ ]:
cancel(r)


Out[ ]:
$$\frac{x^{2} - 5 x + 6}{x - 4}$$

Powers

The SymPy tutorial offers a thorough explanation on powers and which get simplified and under what conditions. Basically

  • $x^a x^b = x^{a+b}$ is always true. However
  • $x^a y^a=(xy)^a$ is only true with assumptions, such as $x,y \geq 0$ and $a$ real, but not in general. For example, $x=y=-1$ and $a=1/2$ has $x^a \cdot y^a = i \cdot i = -1$, whereas $(xy)^a = 1^{1/2} = 1$.
  • $(x^a)^b = x^{ab}$ is only true with assumptions. For example $x=-1, a=2$, and $b=1/2$ gives $(x^a)^b = 1^{1/2} = 1$, whereas $x^{ab} = (-1)^{1} = -1$.

We see that with assumptions, the following expression does simplify to $0$:


In [ ]:
@syms x y nonnegative=true a real=true
simplify(x^a * y^a - (x*y)^a)


Out[ ]:
$$0$$

However, without assumptions this is not the case


In [ ]:
x,y,a = symbols("x,y,a")
simplify(x^a * y^a - (x*y)^a)


Out[ ]:
$$x^{a} y^{a} - \left(x y\right)^{a}$$

The simplify function calls powsimp to simplify powers, as above. The powsimp function has the keyword argument force=true to force simplification even if assumptions are not specified:


In [ ]:
powsimp(x^a * y^a - (x*y)^a, force=true)


Out[ ]:
$$0$$

Trigonometric simplification

For trigonometric expressions, simplify will use trigsimp to simplify:


In [ ]:
theta = symbols("theta", real=true)
p = cos(theta)^2 + sin(theta)^2


Out[ ]:
$$\sin^{2}{\left (\theta \right )} + \cos^{2}{\left (\theta \right )}$$

Calling either simplify or trigsimp will apply the Pythagorean identity:


In [ ]:
simplify(p)


Out[ ]:
$$1$$

The trigsimp function is, of course, also aware of the double angle formulas:


In [ ]:
simplify(sin(2theta) - 2sin(theta)*cos(theta))


Out[ ]:
$$0$$

The expand_trig function will expand such expressions:


In [ ]:
expand_trig(sin(2theta))


Out[ ]:
$$2 \sin{\left (\theta \right )} \cos{\left (\theta \right )}$$

Coefficients

Returning to polynomials, there are a few functions to find various pieces of the polynomials. First we make a general quadratic polynomial:


In [ ]:
a,b,c,x = symbols("a, b, c, x") 
p = a*x^2 + b*x + c


Out[ ]:
$$a x^{2} + b x + c$$

The coeff(ex, monom) function will return the corresponding coefficient of the monomial:


In [ ]:
coeff(p, x^2) # a 
coeff(p, x)   # b


Out[ ]:
$$b$$

The constant can be found through substitution:


In [ ]:
p(x=>0)


Out[ ]:
$$c$$

Though one could use some trick like this to find all the coefficients:


In [ ]:
Sym[[coeff(p, x^i) for i in N(degree(p)):-1:1]; p(x=>0)]


Out[ ]:
\begin{bmatrix}a\\b\\c\end{bmatrix}

that is cumbersome, at best. SymPy has a function coeffs, but it is defined for polynomial types, so it will fail on p:


In [ ]:
coeffs(p) # fails


Out[ ]:
KeyError("coeffs")

Polynomials are a special class in SymPy and must be constructed. The Poly constructor can be used. As there is more than one free variable in p, we specify the variable x below:


In [ ]:
q = Poly(p, x)
coeffs(q)


Out[ ]:
3-element Array{Any,1}:
 a
 b
 c

Polynomial roots: solve, real_roots, polyroots, nroots

SymPy provides functions to find the roots of a polynomial. In general, a polynomial with real coefficients of degree $n$ will have $n$ roots when multiplicities and complex roots are accounted for. The number of real roots is consequently between $0$ and $n$.

For a univariate polynomial expression (a single variable), the real roots, when available, are returned by real_roots. For example,


In [ ]:
real_roots(x^2 - 2)


Out[ ]:
\begin{bmatrix}- \sqrt{2}\\\sqrt{2}\end{bmatrix}

Unlike factor – which factors only over the rationals – real_roots finds the two irrational roots here. It is well known (the Abel-Ruffini theorem) that for polynomials of degree 5 or higher it is not always possible to express the roots in terms of radicals. However, when the roots are rational SymPy can have success:


In [ ]:
p = (x-3)^2*(x-2)*(x-1)*x*(x+1)*(x^2 + x + 1)
real_roots(p)


Out[ ]:
\begin{bmatrix}-1\\0\\1\\2\\3\\3\end{bmatrix}

In this example, the degree of p is 8, but only the 6 real roots are returned; the double root of $3$ is accounted for. The two complex roots of x^2 + x + 1 are not considered by this function. The complete set of distinct roots can be found with solve:


In [ ]:
solve(p)


Out[ ]:
\begin{bmatrix}-1\\0\\1\\2\\3\\- \frac{1}{2} - \frac{\sqrt{3} i}{2}\\- \frac{1}{2} + \frac{\sqrt{3} i}{2}\end{bmatrix}

This finds the complex roots, but does not account for the double root. The roots function of SymPy does.

This particular function is not exported (as it conflicts with the roots function from Polynomials and Roots) but we can still access it using p[:roots]() or its alias polyroots.

Indexing with a symbol. When a symbolic expression is indexed by a symbol it returns a function which maps to a corresponding SymPy function. For example, p[:roots](args...) will call roots(p, args...) within SymPy. For methods of SymPy objects, the same is true, so if roots were a class method, then the call would resolve to p.roots(args...).

The output of calling roots will be a dictionary whose keys are the roots and values the multiplicity.


In [ ]:
polyroots(p)


Out[ ]:
\begin{equation*}\begin{cases}1 & \text{=>} &1\\- \frac{1}{2} - \frac{\sqrt{3} i}{2} & \text{=>} &1\\3 & \text{=>} &2\\0 & \text{=>} &1\\-1 & \text{=>} &1\\- \frac{1}{2} + \frac{\sqrt{3} i}{2} & \text{=>} &1\\2 & \text{=>} &1\\\end{cases}\end{equation*}

When exact answers cannot be found, the polyroots call returns an empty collection:


In [ ]:
p = x^5 - x + 1
polyroots(p)


Out[ ]:
\begin{equation*}\begin{cases}\end{cases}\end{equation*}

Calling solve seems to produce very little as well:


In [ ]:
rts = solve(p)


Out[ ]:
\begin{bmatrix}\operatorname{CRootOf} {\left(x^{5} - x + 1, 0\right)}\\\operatorname{CRootOf} {\left(x^{5} - x + 1, 1\right)}\\\operatorname{CRootOf} {\left(x^{5} - x + 1, 2\right)}\\\operatorname{CRootOf} {\left(x^{5} - x + 1, 3\right)}\\\operatorname{CRootOf} {\left(x^{5} - x + 1, 4\right)}\end{bmatrix}

But in fact, rts contains lots of information. We can extract numeric values quite easily with N:


In [ ]:
[N(r) for r in rts]     # or map(N, rts)


Out[ ]:
5-element Array{Any,1}:
         -1.1673     
 -0.181232-1.08395im 
 -0.181232+1.08395im 
  0.764884-0.352472im
  0.764884+0.352472im

These are numeric approximations to irrational values. For numeric approximations to polynomial roots, the nroots function is also provided, though with this call the answers are still symbolic:


In [ ]:
nroots(p)


Out[ ]:
\begin{bmatrix}-1.16730397826142\\-0.181232444469875 - 1.08395410131771 i\\-0.181232444469875 + 1.08395410131771 i\\0.764884433600585 - 0.352471546031726 i\\0.764884433600585 + 0.352471546031726 i\end{bmatrix}

The solve function

The solve function is more general purpose than just finding roots of univariate polynomials. The function tries to solve for when an expression is 0, or a set of expressions are all 0.

For example, it can be used to solve when $\cos(x) = \sin(x)$:


In [ ]:
solve(cos(x) - sin(x))


Out[ ]:
\begin{bmatrix}- \frac{3 \pi}{4}\\\frac{\pi}{4}\end{bmatrix}

Though there are infinitely many correct solutions, solve returns only those within a certain range.

The solveset function appears in version 1.0 of SymPy and is an intended replacement for solve. Here we see it gives all solutions:


In [ ]:
u = solveset(cos(x) - sin(x))


Out[ ]:
$$\left\{2 n \pi - \frac{3 \pi}{4}\; |\; n \in \mathbb{Z}\right\} \cup \left\{2 n \pi + \frac{\pi}{4}\; |\; n \in \mathbb{Z}\right\}$$

The output of solveset is a set, rather than a vector or dictionary. To get the values requires some work. For finite sets we collect the elements with elements:


In [ ]:
v = solveset(x^2 - 4)
elements(v)


Out[ ]:
2-element Array{Any,1}:
 -2
  2

The elements function does not work for more complicated sets, such as u. For these, the contains method may be useful.
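
For example, a minimal sketch using the indexing-with-a-symbol interface described earlier to call the underlying contains method:


In [ ]:
u[:contains](PI/4)    # PI/4 solves cos(x) = sin(x), so this returns a symbolic True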

Solving within SymPy has limits. For example, there is no symbolic solution here:


In [ ]:
solve(cos(x) - x)


Out[ ]:
PyError (:PyObject_Call) <type 'exceptions.NotImplementedError'>
NotImplementedError('multiple generators [x, cos(x)]\nNo algorithms are implemented to solve equation -x + cos(x)',)
  File "/Users/verzani/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/sympy/solvers/solvers.py", line 1053, in solve
    solution = _solve(f[0], *symbols, **flags)
  File "/Users/verzani/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/sympy/solvers/solvers.py", line 1619, in _solve
    raise NotImplementedError('\n'.join([msg, not_impl_msg % f]))

For such, a numeric method would be needed.
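
For example, SymPy's nsolve can find a numeric root from a starting guess (a minimal sketch, assuming nsolve is exposed with this signature):


In [ ]:
nsolve(cos(x) - x, 0.5)    # ≈ 0.739085133215161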

Though it can't solve everything, the solve function can also solve equations of a more general type. For example, here it is used to derive the quadratic equation:


In [ ]:
a,b,c  = symbols("a,b,c", real=true)
p = a*x^2 + b*x + c
solve(p, x)


Out[ ]:
\begin{bmatrix}\frac{1}{2 a} \left(- b + \sqrt{- 4 a c + b^{2}}\right)\\- \frac{1}{2 a} \left(b + \sqrt{- 4 a c + b^{2}}\right)\end{bmatrix}

The extra argument x is passed to solve so that solve knows which variable to solve for.

The solveset function is similar:


In [ ]:
solveset(p, x)


Out[ ]:
$$\left\{- \frac{b}{2 a} - \frac{1}{2 a} \sqrt{- 4 a c + b^{2}}, - \frac{b}{2 a} + \frac{1}{2 a} \sqrt{- 4 a c + b^{2}}\right\}$$

If the x value is not given, solveset will complain and solve tries to find a solution with all the free variables:


In [ ]:
solve(p)


Out[ ]:
1-element Array{Dict{SymPy.Sym,SymPy.Sym},1}:
 Dict(a=>-(b*x + c)/x^2)

Systems of equations can be solved as well. We specify them within a vector of expressions, [ex1, ex2, ..., exn] where a found solution is one where all the expressions are 0. For example, to solve this linear system: $2x + 3y = 6, 3x - 4y=12$, we have:


In [ ]:
x, y = symbols("x,y", real=true)
exs = [2x+3y-6, 3x-4y-12]
d = solve(exs)


Out[ ]:
\begin{equation*}\begin{cases}y & \text{=>} &- \frac{6}{17}\\x & \text{=>} &\frac{60}{17}\\\end{cases}\end{equation*}

We can "check our work" by plugging into each equation. We take advantage of how the subs function allows us to pass in a dictionary:


In [ ]:
map(ex -> subs(ex, d), exs)


Out[ ]:
\begin{bmatrix}0\\0\end{bmatrix}

In the previous example, the system had two equations and two unknowns. When that is not the case, one can specify the variables to solve for as a vector. In this example, we find a quadratic polynomial that approximates $\cos(x)$ near $0$:


In [ ]:
a,b,c,h = symbols("a,b,c,h", real=true)
p = a*x^2 + b*x + c
fn = cos
exs = [fn(0*h)-p(x=>0), fn(h)-p(x => h), fn(2h)-p(x => 2h)]
d = solve(exs, [a,b,c])


Out[ ]:
\begin{equation*}\begin{cases}a & \text{=>} &\frac{1}{2 h^{2}} \left(- 2 \cos{\left (h \right )} + \cos{\left (2 h \right )} + 1\right)\\b & \text{=>} &\frac{1}{2 h} \left(4 \cos{\left (h \right )} - \cos{\left (2 h \right )} - 3\right)\\c & \text{=>} &1\\\end{cases}\end{equation*}

Again, a dictionary is returned. The polynomial itself can be found by substituting back in for a, b, and c:


In [ ]:
quad_approx = subs(p, d)


Out[ ]:
$$1 + \frac{x}{2 h} \left(4 \cos{\left (h \right )} - \cos{\left (2 h \right )} - 3\right) + \frac{x^{2}}{2 h^{2}} \left(- 2 \cos{\left (h \right )} + \cos{\left (2 h \right )} + 1\right)$$

(Taking the limit as $h$ goes to 0 produces the answer $1 - x^2/2$.)

Finally for solve, we show one way to re-express the polynomial $a_2x^2 + a_1x + a_0$ as $b_2(x-c)^2 + b_1(x-c) + b_0$ using solve (and not, say, an expansion theorem.)


In [ ]:
n = 3
x, c = symbols("x,c")
as = Sym["a$i" for i in 0:(n-1)]
bs = Sym["b$i" for i in 0:(n-1)]
p = sum([as[i+1]*x^i for i in 0:(n-1)])
q = sum([bs[i+1]*(x-c)^i for i in 0:(n-1)])
solve(p-q, bs)


Out[ ]:
\begin{equation*}\begin{cases}b_{0} & \text{=>} &a_{0} + a_{1} c + a_{2} c^{2}\\b_{1} & \text{=>} &a_{1} + 2 a_{2} c\\b_{2} & \text{=>} &a_{2}\\\end{cases}\end{equation*}

Solving using logical operators

The solve function does not need to just solve ex = 0. There are other means to specify an equation. Ideally, it would be nice to say ex1 == ex2, but == is interpreted as a comparison, not as an equation. Rather, SymPy introduces Eq for equality. So this expression


In [ ]:
solve(Eq(x, 1))


Out[ ]:
\begin{bmatrix}1\end{bmatrix}

gives 1, as expected from solving x == 1.

In addition to Eq, there are Lt, Le, Ge, Gt. The Unicode operators are not aliased to these, but there are alternatives \ll[tab], \le[tab], \Equal[tab], \ge[tab], \gg[tab] and \neg[tab] to negate.

So, the above could have been written with the following nearly identical expression, though it is entered with \Equal[tab].


In [ ]:
solve(x ⩵ 1)


Out[ ]:
\begin{bmatrix}1\end{bmatrix}

Here is an alternative way of asking a previous question on a pair of linear equations:


In [ ]:
x, y = symbols("x,y", real=true)
exs = [2x+3y ⩵ 6, 3x-4y ⩵ 12]    ## Using \Equal[tab]
d = solve(exs)


Out[ ]:
\begin{equation*}\begin{cases}y & \text{=>} &- \frac{6}{17}\\x & \text{=>} &\frac{60}{17}\\\end{cases}\end{equation*}

Plotting

The Plots package allows many 2-dimensional plots of SymPy objects to be made without committing to a particular backend plotting package. SymPy provides recipes so that symbolic expressions can be used where functions are expected in the Plots interface. [See the help page for sympy_plotting.]

In particular, the following methods of plot are defined:

  • plot(ex::Sym, a, b) will plot the expression of a single variable over the interval [a,b]
  • plot!(ex::Sym, a, b) will add to the current plot a plot of the expression of a single variable over the interval [a,b]
  • plot(exs::Vector{Sym}, a, b) will plot each expression over [a,b]
  • plot(ex1, ex2, a, b) will make a parametric plot of the two expressions over the interval [a,b].
  • contour(xs, ys, ex::Sym) will make a contour plot of the expression of two variables over the grid specified by the xs and ys.
  • surface(xs, ys, ex::Sym) will make a surface plot of the expression of two variables over the grid specified by the xs and ys.

For example:


In [ ]:
x = symbols("x")
using Plots
plotly()
#
plot(x^2 - 2, -2,2)


Out[ ]:

Or a parametric plot:


In [ ]:
plot(sin(2x), cos(3x), 0, 4pi)


Out[ ]:

For plotting with other plotting packages, it is generally faster to first call lambdify on the expression and then generate y values with the resulting Julia function.
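
For example, a small sketch:


In [ ]:
fn = lambdify(x^2 - 2)    # fn is a Julia function in the free variable x
fn(2)                     # 2; much faster than substituting into the Sym object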


In addition, with PyPlot a few other plotting functions from SymPy are available from its interface to MatplotLib:

  • plot_parametric_surface((ex1::Sym, ex2::Sym, ex3::Sym), (uvar, a0, b0), (vvar, a1, b1)) – make a surface plot of the expressions parameterized by the region [a0,b0] x [a1,b1]. The default region is [-5,5]x[-5,5] where the ordering of the variables is given by free_symbols(ex).
  • plot_implicit(predicate, (xvar, a0, b0), (yvar, a1, b1)) – make an implicit equation plot of the expressions over the region [a0,b0] x [a1,b1]. The default region is [-5,5]x[-5,5] where the ordering of the variables is given by free_symbols(ex). To create predicates from the variable, the functions Lt, Le, Eq, Ge, and Gt can be used, as with Lt(x*y, 1). For infix notation, unicode operators can be used: \ll<tab>, \le<tab>, \Equal<tab>, \ge<tab>, and \gg<tab>. For example, x*y ≪ 1. To combine terms, the unicode \vee<tab> (for "or") and \wedge<tab> (for "and") can be used.

Calculus

SymPy has many of the basic operations of calculus provided through a relatively small handful of functions.

Limits

Limits are computed by the limit function which takes an expression, a variable and a value, and optionally a direction specified by either dir="+" or dir="-".

For example, this shows Gauss was right:


In [ ]:
limit(sin(x)/x, x, 0)


Out[ ]:
$$1$$

Alternatively, the second and third arguments can be specified as a pair:


In [ ]:
limit(sin(x)/x, x=>0)


Out[ ]:
$$1$$

Limits at infinity are done by using oo for $\infty$:


In [ ]:
limit((1+1/x)^x, x => oo)


Out[ ]:
$$e$$

This example computes what L'Hopital reportedly paid a Bernoulli for:


In [ ]:
a = symbols("a", positive=true)
ex = (sqrt(2a^3*x-x^4) - a*(a^2*x)^(1//3)) / (a - (a*x^3)^(1//4))


Out[ ]:
$$\frac{- a^{\frac{5}{3}} \sqrt[3]{x} + \sqrt{2 a^{3} x - x^{4}}}{- \sqrt[4]{a} \sqrt[4]{x^{3}} + a}$$

Substituting $x=a$ gives an indeterminate form:


In [ ]:
ex(x=>a)         # or subs(ex, x, a)


Out[ ]:
$$\mathrm{NaN}$$

We can see it is of the form $0/0$:


In [ ]:
subs(denom(ex), x, a), subs(numer(ex), x, a)


Out[ ]:
(0,0)

And we get


In [ ]:
limit(ex, x => a)


Out[ ]:
$$\frac{16 a}{9}$$

In a previous example, we defined quad_approx:


In [ ]:
quad_approx


Out[ ]:
$$1 + \frac{x}{2 h} \left(4 \cos{\left (h \right )} - \cos{\left (2 h \right )} - 3\right) + \frac{x^{2}}{2 h^{2}} \left(- 2 \cos{\left (h \right )} + \cos{\left (2 h \right )} + 1\right)$$

The limit as h goes to $0$ gives 1 - x^2/2, as expected:


In [ ]:
limit(quad_approx, h => 0)


Out[ ]:
$$- \frac{x^{2}}{2} + 1$$

Left and right limits

The limit is defined when both the left and right limits exist and are equal. But left and right limits can exist and not be equal. The sign function is $1$ for positive $x$, $-1$ for negative $x$ and $0$ when $x$ is 0. It should not have a limit at $0$:


In [ ]:
limit(sign(x), x => 0)


Out[ ]:
$$1$$

Oops. Well, the left and right limits are different anyway:


In [ ]:
limit(sign(x), x => 0, dir="-"), limit(sign(x), x => 0, dir="+")


Out[ ]:
(-1,1)

(The limit function finds the right limit by default. To be careful, either plot or check that both the left and right limit exist and are equal.)

Operator interface

For univariate functions there is an "operator" interface, where we pass a function object as the first argument and the value for c as the second (the variable is implicit, as f has only one).


In [ ]:
f(x) = sin(5x)/x
limit(f, 0)


Out[ ]:
$$5$$

Numeric limits

The limit function uses the Gruntz algorithm. It is far more reliable than simple numeric attempts at limits. An example from Gruntz's work is the right limit at $0$ of the function:


In [ ]:
f(x) = 1/x^(log(log(log(log(1/x)))) - 1)


Out[ ]:
f (generic function with 1 method)

A numeric attempt might be done along these lines:


In [ ]:
hs = [10.0^(-i) for i in 6:16]
ys = [f(h) for h in hs]
[hs ys]


Out[ ]:
11x2 Array{Any,2}:
 1.0e-6   6.14632e-7 
 1.0e-7   1.42981e-7 
 1.0e-8   3.43858e-8 
 1.0e-9   8.52992e-9 
 1.0e-10  2.17687e-9 
 1.0e-11  5.70097e-10
 1.0e-12  1.52866e-10
 1.0e-13  4.18839e-11
 1.0e-14  1.17057e-11
 1.0e-15  3.33197e-12
 1.0e-16  9.64641e-13

The values appear to approach $0$. However, in fact they will ultimately head off to $\infty$:


In [ ]:
limit(f(x), x, 0, dir="+")


Out[ ]:
$$\infty$$

Derivatives

One could use limits to implement the definition of a derivative:


In [ ]:
x, h = symbols("x,h")
f(x) = exp(x)*sin(x)
limit((f(x+h) - f(x)) / h, h, 0)


Out[ ]:
$$e^{x} \sin{\left (x \right )} + e^{x} \cos{\left (x \right )}$$

However, it would be pretty inefficient, as SymPy already does a great job with derivatives. The diff function implements this. The basic syntax is diff(ex, x) to find the first derivative in x of the expression in ex, or its generalization to $k$th derivatives with diff(ex, x, k).

The same derivative computed above by a limit could be found with:


In [ ]:
diff(f(x), x)


Out[ ]:
$$e^{x} \sin{\left (x \right )} + e^{x} \cos{\left (x \right )}$$

Similarly, we can compute other derivatives:


In [ ]:
diff(x^x, x)


Out[ ]:
$$x^{x} \left(\log{\left (x \right )} + 1\right)$$

Or


In [ ]:
diff(exp(-x^2), x, 2)


Out[ ]:
$$2 \left(2 x^{2} - 1\right) e^{- x^{2}}$$

As an alternate to specifying the number of derivatives, multiple variables can be passed to diff:


In [ ]:
diff(exp(-x^2), x, x, x)     # same as diff(..., x, 3)


Out[ ]:
$$4 x \left(- 2 x^{2} + 3\right) e^{- x^{2}}$$

This could include variables besides x.

The output is a simple expression, so diff can be composed with other functions, such as solve. For example, here we find the critical points where the derivative is $0$ of some rational function:


In [ ]:
f(x) = (12x^2 - 1) / (x^3)
diff(f(x), x) |> solve


Out[ ]:
\begin{bmatrix}- \frac{1}{2}\\\frac{1}{2}\end{bmatrix}

Operator version

SymPy provides an "operator" version of diff for univariate functions for convenience (diff(f::Function,k=1)=diff(f(x),x,k)):


In [ ]:
f(x) = exp(x)*cos(x)
diff(f, 2)


Out[ ]:
$$- 2 e^{x} \sin{\left (x \right )}$$

Partial derivatives

The diff function makes finding partial derivatives as easy as specifying the variable to differentiate in. This example computes the mixed partials of an expression in x and y:


In [ ]:
x,y = symbols("x,y")
ex = x^2*cos(y)
Sym[diff(ex,v1, v2) for v1 in [x,y], v2 in [x,y]]


Out[ ]:
\begin{bmatrix}2 \cos{\left (y \right )}&- 2 x \sin{\left (y \right )}\\- 2 x \sin{\left (y \right )}&- x^{2} \cos{\left (y \right )}\end{bmatrix}

The extra Sym, of the form T[], helps Julia resolve the type of the output.

Unevaluated derivatives

The Derivative function provides unevaluated derivatives, useful with differential equations and the output for unknown functions. Here is an example:


In [ ]:
ex = Derivative(exp(x*y), x, y, 2)


Out[ ]:
$$\frac{\partial^{3}}{\partial x\partial y^{2}} e^{x y}$$

(The y,2 is a replacement for y,y which makes higher order terms easier to type.) These expressions are evaluated with doit:


In [ ]:
doit(ex)


Out[ ]:
$$x \left(x y + 2\right) e^{x y}$$

Implicit derivatives

SymPy can be used to find derivatives of implicitly defined functions. For example, the task of finding $dy/dx$ for the equation:

$$ y^4 - x^4 -y^2 + 2x^2 = 0 $$

As with the mathematical solution, the key is to treat one of the variables as depending on the other. In this case, we think of $y$ locally as a function of $x$. SymPy allows us to create symbolic functions, and we will use one to substitute in for y.

In SymPy, symbolic functions use the class name "Function", but in SymPy.jl we use SymFunction to avoid a name collision with one of Julia's primary types. The constructor can be used as SymFunction(:F):


In [ ]:
F, G = SymFunction("F"), SymFunction("G")


Out[ ]:
(SymPy.SymFunction(PyObject F,0),SymPy.SymFunction(PyObject G,0))

We can call these functions, but we get a function expression:


In [ ]:
F(x)


Out[ ]:
$$F{\left (x \right )}$$

SymPy can differentiate symbolically, again with diff:


In [ ]:
diff(F(x))


Out[ ]:
$$\frac{d}{d x} F{\left (x \right )}$$

Or, for symbolic functions, the more natural prime notation can be used:
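

In [ ]:
F'(x)    # the same derivative as diff(F(x))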

To get back to our problem, we have our expression:


In [ ]:
x,y = symbols("x, y")
ex = y^4 - x^4 - y^2 + 2x^2


Out[ ]:
$$- x^{4} + 2 x^{2} + y^{4} - y^{2}$$

Now we substitute:


In [ ]:
ex1 = ex(y=>F(x))


Out[ ]:
$$- x^{4} + 2 x^{2} + F^{4}{\left (x \right )} - F^{2}{\left (x \right )}$$

We want to differentiate "both" sides. As the right side is just $0$, there isn't anything to do here, but mentally keep track. As for the left we have:


In [ ]:
ex2 = diff(ex1, x)


Out[ ]:
$$- 4 x^{3} + 4 x + 4 F^{3}{\left (x \right )} \frac{d}{d x} F{\left (x \right )} - 2 F{\left (x \right )} \frac{d}{d x} F{\left (x \right )}$$

Now we collect terms and solve in terms of $F'(x)$:


In [ ]:
ex3 = solve(ex2, F'(x))[1]


Out[ ]:
$$\frac{2 x \left(x^{2} - 1\right)}{\left(2 F^{2}{\left (x \right )} - 1\right) F{\left (x \right )}}$$

Finally, we substitute back into the solution for $F(x)$:


In [ ]:
ex4 = ex3(F(x) => y)


Out[ ]:
$$\frac{2 x \left(x^{2} - 1\right)}{y \left(2 y^{2} - 1\right)}$$
Example: A Norman Window

A classic calculus problem is to maximize the area of a Norman window (in the shape of a rectangle with a half circle atop) when the perimeter is fixed to be $P \geq 0$.

Label the rectangle with $w$ and $h$ for width and height and then the half circle has radius $r=w/2$. With this, we can see that the area is $wh+(1/2)\pi r^2$ and the perimeter is $w + 2h + \pi r$. This gives:


In [ ]:
w, h, P = symbols("w, h, P", nonnegative=true)
r = w/2
A = w*h + 1//2 * (pi * r^2)
p = w + 2h + pi*r


Out[ ]:
$$2 h + w + \frac{\pi w}{2}$$

(There is a subtlety above: 1//2*pi*r^2 would lose exactness, as the products are done left to right and 1//2*pi is converted to an approximate floating point value before multiplying r^2; as such, we regroup the terms. It may be easier to use PI instead of pi.)

We want to solve for h from when p=P (our fixed value) and substitute back into A. We solve P-p==0:


In [ ]:
h0 =  solve(P-p, h)[1]
A1 = A(h => h0)


Out[ ]:
$$\frac{\pi w^{2}}{8} + w \left(\frac{P}{2} - \frac{\pi w}{4} - \frac{w}{2}\right)$$

Now we note this is a parabola in w, so any maximum will be an endpoint or the vertex, provided the leading term is negative. The leading term can be found through:


In [ ]:
coeffs(Poly(A1, w))


Out[ ]:
2-element Array{Any,1}:
 -1/2 - pi/8
         P/2

Or without using the Poly methods, we could do this:


In [ ]:
coeff(collect(expand(A1), w), w^2)


Out[ ]:
$$- \frac{1}{2} - \frac{\pi}{8}$$

Either way, the leading coefficient, $-1/2 - \pi/8$, is negative, so the maximum can only happen at an endpoint or the vertex of the parabola. Now we check that when $w=0$ (the left endpoint) the area is $0$:


In [ ]:
A1(w => 0)


Out[ ]:
$$0$$

The other endpoint is when $h=0$, or


In [ ]:
b = solve(subs(P-p, h, 0), w)[1]


Out[ ]:
$$\frac{2 P}{2 + \pi}$$

We will need to check the area at b and at the vertex.

To find the vertex, we can use calculus – it will be when the derivative in w is $0$:


In [ ]:
c = solve(diff(A1, w), w)[1]


Out[ ]:
$$\frac{2 P}{\pi + 4}$$

The answer will be the larger of A1 at b or c:


In [ ]:
atb = A1(w => b)
atc = A1(w => c)


Out[ ]:
$$\frac{\pi P^{2}}{2 \left(\pi + 4\right)^{2}} + \frac{2 P}{\pi + 4} \left(- \frac{\pi P}{2 \pi + 8} - \frac{P}{\pi + 4} + \frac{P}{2}\right)$$

A simple comparison isn't revealing:


In [ ]:
atc - atb


Out[ ]:
$$- \frac{\pi P^{2}}{2 \left(2 + \pi\right)^{2}} + \frac{\pi P^{2}}{2 \left(\pi + 4\right)^{2}} - \frac{2 P}{2 + \pi} \left(- \frac{\pi P}{4 + 2 \pi} - \frac{P}{2 + \pi} + \frac{P}{2}\right) + \frac{2 P}{\pi + 4} \left(- \frac{\pi P}{2 \pi + 8} - \frac{P}{\pi + 4} + \frac{P}{2}\right)$$

But after simplifying, we can see that this expression is positive if $P$ is:


In [ ]:
simplify(atc - atb)


Out[ ]:
$$\frac{2 P^{2}}{16 + \pi^{3} + 20 \pi + 8 \pi^{2}}$$

With this observation, we conclude the maximum area happens at c with area atc.

Integrals

Integration is implemented in SymPy through the integrate function. There are two basic calls: integrate(f(x), x) will find the indefinite integral ($\int f(x) dx$) and when endpoints are specified through integrate(f(x), (x, a, b)) the definite integral will be found ($\int_a^b f(x) dx$). The special form integrate(ex, x, a, b) can be used for single integrals, but the specification through a tuple is needed for multiple integrals.

Basic integrals are implemented:


In [ ]:
integrate(x^3, x)


Out[ ]:
$$\frac{x^{4}}{4}$$

Or in more generality:


In [ ]:
n = symbols("n", real=true)
ex = integrate(x^n, x)


Out[ ]:
$$\begin{cases} \log{\left (x \right )} & \text{for}\: n = -1 \\\frac{x^{n + 1}}{n + 1} & \text{otherwise} \end{cases}$$

The output here is a piecewise function; performing a substitution will, in this case, choose a branch:


In [ ]:
ex(n => 3)


Out[ ]:
$$\frac{x^{4}}{4}$$

Definite integrals are just as easy. Here is Archimedes' answer:


In [ ]:
integrate(x^2, (x, 0, 1))


Out[ ]:
$$\frac{1}{3}$$

Tedious problems, such as those needing multiple integration-by-parts steps can be done easily:


In [ ]:
integrate(x^5*sin(x), x)


Out[ ]:
$$- x^{5} \cos{\left (x \right )} + 5 x^{4} \sin{\left (x \right )} + 20 x^{3} \cos{\left (x \right )} - 60 x^{2} \sin{\left (x \right )} - 120 x \cos{\left (x \right )} + 120 \sin{\left (x \right )}$$

The SymPy tutorial says:

"integrate uses powerful algorithms that are always improving to compute both definite and indefinite integrals, including heuristic pattern matching type algorithms, a partial implementation of the Risch algorithm, and an algorithm using Meijer G-functions that is useful for computing integrals in terms of special functions, especially definite integrals."

The tutorial gives the following example:


In [ ]:
f(x) = (x^4 + x^2 * exp(x) - x^2 - 2x*exp(x) - 2x - exp(x)) * exp(x) / ( (x-1)^2 * (x+1)^2 * (exp(x) + 1) )
integrate(f(x), x)


Out[ ]:
$$\log{\left (e^{x} + 1 \right )} + \frac{e^{x}}{x^{2} - 1}$$

Multiple integrals

The integrate function uses a tuple, (var, a, b), to specify the limits of a definite integral. This syntax lends itself readily to multiple integration.

For example, the following computes the integral of $xy$ over the unit square:


In [ ]:
x, y = symbols("x,y")
integrate(x*y, (y, 0, 1), (x, 0, 1))


Out[ ]:
$$\frac{1}{4}$$

The innermost terms can depend on outer ones. For example, the following integrates $x^2y$ over the upper half of the unit circle:


In [ ]:
integrate(x^2*y, (y, 0, sqrt(1 - x^2)), (x, -1, 1))


Out[ ]:
$$\frac{2}{15}$$

Unevaluated integrals

The Integral function can stage unevaluated integrals that will be evaluated by calling doit. It is also used when the output is unknown. This example comes from the tutorial:


In [ ]:
integ = Integral(sin(x^2), x)


Out[ ]:
$$\int \sin{\left (x^{2} \right )}\, dx$$

In [ ]:
doit(integ)


Out[ ]:
$$\frac{3 \sqrt{2} \sqrt{\pi} \Gamma{\left(\frac{3}{4} \right)} S\left(\frac{\sqrt{2} x}{\sqrt{\pi}}\right)}{8 \Gamma{\left(\frac{7}{4} \right)}}$$

Operator version

For univariate functions there is a convenience wrapper so that the operator styles – integrate(f) and integrate(f, a, b) – will perform the integrations.


In [ ]:
f(x) = exp(x) * cos(x)
integrate(f)


Out[ ]:
$$\frac{e^{x}}{2} \sin{\left (x \right )} + \frac{e^{x}}{2} \cos{\left (x \right )}$$

Or


In [ ]:
integrate(sin, 0, pi)


Out[ ]:
$$2$$

Taylor series

The series function can compute series expansions around a point to a specified order. For example, the following command finds 4 terms of the series expansion of exp(sin(x)) in x about $c=0$:


In [ ]:
s1 = series(exp(sin(x)), x, 0, 4)


Out[ ]:
$$1 + x + \frac{x^{2}}{2} + \mathcal{O}\left(x^{4}\right)$$

The coefficients are from the Taylor expansion ($a_i=f^{(i)}(c)/i!$). The big "O" term indicates that any other power is no bigger than a constant times $x^4$.

Consider what happens when we multiply series of different orders:


In [ ]:
s2 = series(cos(exp(x)), x, 0, 6)


Out[ ]:
$$\cos{\left (1 \right )} - x \sin{\left (1 \right )} + x^{2} \left(- \frac{1}{2} \sin{\left (1 \right )} - \frac{1}{2} \cos{\left (1 \right )}\right) - \frac{x^{3}}{2} \cos{\left (1 \right )} + x^{4} \left(- \frac{1}{4} \cos{\left (1 \right )} + \frac{5}{24} \sin{\left (1 \right )}\right) + x^{5} \left(- \frac{1}{24} \cos{\left (1 \right )} + \frac{23}{120} \sin{\left (1 \right )}\right) + \mathcal{O}\left(x^{6}\right)$$

In [ ]:
simplify(s1 * s2)


Out[ ]:
$$\cos{\left (1 \right )} + \sqrt{2} x \cos{\left (\frac{\pi}{4} + 1 \right )} - \frac{3 x^{2}}{2} \sin{\left (1 \right )} - \sqrt{2} x^{3} \sin{\left (\frac{\pi}{4} + 1 \right )} + \mathcal{O}\left(x^{4}\right)$$

The big "O" term is $x^4$, as smaller order terms in s2 are covered in this term. The big "O" notation is sometimes not desired, in which case the removeO function can be employed:


In [ ]:
removeO(s1)


Out[ ]:
$$\frac{x^{2}}{2} + x + 1$$

Sums

SymPy can do sums, including some infinite ones. The summation function performs this task. For example, we have


In [ ]:
i, n = symbols("i, n")
summation(i^2, (i, 1, n))


Out[ ]:
$$\frac{n^{3}}{3} + \frac{n^{2}}{2} + \frac{n}{6}$$

Like Integral and Derivative, there is also a Sum function to stage the task until the doit function is called to initiate the sum.

Some famous sums can be computed:


In [ ]:
sn = Sum(1/i^2, (i, 1, n))
doit(sn)


Out[ ]:
$$\operatorname{harmonic}{\left (n,2 \right )}$$

And from this a limit is available:


In [ ]:
limit(doit(sn), n, oo)


Out[ ]:
$$\frac{\pi^{2}}{6}$$

This would have also been possible through summation(1/i^2, (i, 1, oo)).
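
That is:


In [ ]:
summation(1/i^2, (i, 1, oo))    # pi^2/6 directly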

Vector-valued functions

Julia makes constructing a vector of symbolic objects easy:


In [ ]:
x,y = symbols("x,y")
v = [1,2,x]
w = [1,y,3]


Out[ ]:
\begin{bmatrix}1\\y\\3\end{bmatrix}

The generic definitions of vector operations will work as expected with symbolic objects:


In [ ]:
dot(v,w)


Out[ ]:
$$2 y + 3 \overline{x} + 1$$

Or


In [ ]:
cross(v,w)


Out[ ]:
\begin{bmatrix}- x y + 6\\x - 3\\y - 2\end{bmatrix}

Finding gradients can be done using a comprehension.


In [ ]:
ex = x^2*y - x*y^2
Sym[diff(ex,var) for var in [x,y]]


Out[ ]:
\begin{bmatrix}2 x y - y^{2}\\x^{2} - 2 x y\end{bmatrix}

The matrix of mixed partials is similarly found by passing two variables to differentiate in to diff:


In [ ]:
Sym[diff(ex, v1, v2) for v1 in [x,y], v2 in [x,y]]


Out[ ]:
\begin{bmatrix}2 y&2 \left(x - y\right)\\2 \left(x - y\right)&- 2 x\end{bmatrix}

For this task, SymPy provides the hessian function:


In [ ]:
hessian(ex)


Out[ ]:
\begin{bmatrix}2 y&2 x - 2 y\\2 x - 2 y&- 2 x\end{bmatrix}

(When there are symbolic parameters, the free variables are specified as a vector, as in hessian(ex, vars).)
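
For example, with the expression above this would be:


In [ ]:
hessian(ex, [x, y])    # the same answer as hessian(ex), with the variables made explicit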

Matrices

SymPy has a special class to work with matrices, as does Julia. With SymPy, matrices are just Julian matrices with symbolic entries. The conversion to matrices that SymPy knows about is primarily handled in the background, though, if need be, convert(SymMatrix, M) can be used.

Constructing matrices then follows Julia's conventions:


In [ ]:
x,y = symbols("x,y")
M = [1 x; x 1]


Out[ ]:
\begin{bmatrix}1&x\\x&1\end{bmatrix}

As much as possible, generic Julia functions are utilized:


In [ ]:
diagm(ones(Sym, 5))
M^2
det(M)


Out[ ]:
$$- x^{2} + 1$$

Occasionally, the SymPy method has more content:


In [ ]:
rref(M)


Out[ ]:
\begin{bmatrix}1&0\\0&1\end{bmatrix}

This is in contrast to SymPy's own rref method, which also returns a list of the indices of the pivot columns:


In [ ]:
M[:rref]()


Out[ ]:
(SymPy.SymMatrix(PyObject Matrix([
[1, 0],
[0, 1]])),Any[0,1])

(Similarly, eigvecs(M) is less informative than M[:eigenvecs]().)

This example from the tutorial shows the nullspace function:


In [ ]:
M = [one(Sym) 2 3 0 0; 4 10 0 0 1]
vs = nullspace(M)


Out[ ]:
3-element Array{Any,1}:
 [-15,6,1,0,0] 
 [0,0,0,1,0]   
 [1,-1/2,0,0,1]

And this shows that they are indeed in the null space of M:


In [ ]:
[M*vs[i] for i in 1:3]


Out[ ]:
3-element Array{Any,1}:
 [0,0]
 [0,0]
 [0,0]

Symbolic expressions can be included in the matrices:


In [ ]:
M = [1 x; x 1]
P, D = diagonalize(M)  # M = PDP^-1
D


Out[ ]:
\begin{bmatrix}- x + 1&0\\0&x + 1\end{bmatrix}

Differential equations

SymPy has facilities for solving ordinary differential equations. The key is to create a symbolic function expression using SymFunction. Again, this may be done through:


In [ ]:
F = SymFunction("F")


Out[ ]:

With this, we can construct a differential equation. Following the SymPy tutorial, we solve $f''(x) - 2f'(x) + f(x) = \sin(x)$:


In [ ]:
diffeq = Eq(diff(F(x), x, 2) - 2*diff(F(x)) + F(x), sin(x))


Out[ ]:
$$F{\left (x \right )} - 2 \frac{d}{d x} F{\left (x \right )} + \frac{d^{2}}{d x^{2}} F{\left (x \right )} = \sin{\left (x \right )}$$

With this, we just need the dsolve function. This is called as dsolve(eq):


In [ ]:
ex = dsolve(diffeq)


Out[ ]:
$$F{\left (x \right )} = \left(C_{1} + C_{2} x\right) e^{x} + \frac{1}{2} \cos{\left (x \right )}$$

The dsolve function in SymPy has an extensive list of named arguments to control the underlying algorithm. These can be passed through with the appropriate keyword arguments.
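
For example, SymPy's hint argument selects among the applicable solution methods; a sketch (hint="best" asks SymPy to try each applicable method and return the simplest answer):


In [ ]:
dsolve(diffeq, hint="best")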

More clearly, the SymFunction objects have the ' method defined to find a derivative, so the above could also have been:


In [ ]:
diffeq = F''(x) - 2F'(x) + F(x) - sin(x)
dsolve(diffeq)


Out[ ]:
$$F{\left (x \right )} = \left(C_{1} + C_{2} x\right) e^{x} + \frac{1}{2} \cos{\left (x \right )}$$

This solution has two constants, $C_1$ and $C_2$, that would be found from initial conditions. Say we know $F(0)=0$ and $F'(0)=1$, can we find the constants? To work with the returned expression, it is most convenient to get just the right hand side. The rhs function will return the right-hand side of a relation:


In [ ]:
ex1 = rhs(ex)


Out[ ]:
$$\left(C_{1} + C_{2} x\right) e^{x} + \frac{1}{2} \cos{\left (x \right )}$$

(The args function can also be used to break up the expression into parts.)
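
For example, a small sketch (assuming args is exposed; otherwise ex1[:args] reaches the underlying property):


In [ ]:
args(ex1)    # the top-level terms of the sum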

With this, we can solve for C1 through substituting in $0$ for $x$:


In [ ]:
solve(ex1(x => 0), Sym("C1"))


Out[ ]:
\begin{bmatrix}- \frac{1}{2}\end{bmatrix}

We see that $C_1=-1/2$, which we substitute in:


In [ ]:
ex2 = ex1(Sym("C1") => -1//2)


Out[ ]:
$$\left(C_{2} x - \frac{1}{2}\right) e^{x} + \frac{1}{2} \cos{\left (x \right )}$$

We know that $F'(0)=1$ now, so we solve for C2 through


In [ ]:
solve( subs(diff(ex2, x), x, 0) - 1, Sym("C2") )


Out[ ]:
\begin{bmatrix}\frac{3}{2}\end{bmatrix}

This gives $C_2=3/2$. Again we substitute in to get our answer:


In [ ]:
ex3 = ex2(Sym("C2") => 3//2)


Out[ ]:
$$\left(\frac{3 x}{2} - \frac{1}{2}\right) e^{x} + \frac{1}{2} \cos{\left (x \right )}$$
Example

We do one more example, this one borrowed from here.

Find the variation of speed with time of a parachutist subject to a drag force of $k\cdot v^2$.

The equation is

$$ \frac{m}{k} \frac{dv}{dt} = \alpha^2 - v^2. $$

We proceed through:


In [ ]:
t, m,k,alpha = symbols("t,m,k,alpha")
v = SymFunction("v")
ex = Eq( (m/k)*v'(t), alpha^2 - v(t)^2 )


Out[ ]:
$$\frac{m}{k} \frac{d}{d t} v{\left (t \right )} = \alpha^{2} - v^{2}{\left (t \right )}$$

We can "classify" this ODE with the method classify_ode. As this is not exported, we call it using indexing:


In [ ]:
ex[:classify_ode]()


Out[ ]:
("separable","1st_power_series","lie_group","separable_Integral")

It is separable, among other classifications. Proceeding with dsolve gives:


In [ ]:
dsolve(ex)


Out[ ]:
$$v{\left (t \right )} = - \frac{\alpha}{\tanh{\left (\frac{\alpha k}{m} \left(C_{1} - t\right) \right )}}$$

Initial Value Problems

Solving an initial value problem can be a bit tedious with SymPy. The first example shows the steps. This is because the ics argument for dsolve only works for a few types of equations. These do not include, by default, the familiar "book" examples, such as $y'(x) = a\cdot y(x)$.

To work around this, SymPy.jl extends the function dsolve to allow a specification of the initial conditions when solving. The new ingredients are the independent variable (x in the examples) and tuples to specify each condition. These are conditions on the values of u, u', .... To illustrate, we follow an example from Wolfram.


In [ ]:
y = SymFunction("y")
a, x = symbols("a,x")
eqn = y'(x) - 3*x*y(x) - 1


Out[ ]:
$$- 3 x y{\left (x \right )} + \frac{d}{d x} y{\left (x \right )} - 1$$

We solve the initial value problem with $y(0) = 4$ as follows:


In [ ]:
x0, y0 = 0, 4
out = dsolve(eqn, x, (y, x0, y0))


Out[ ]:
$$y{\left (x \right )} = \left(\frac{\sqrt{6} \sqrt{\pi}}{6} \operatorname{erf}{\left (\frac{\sqrt{6} x}{2} \right )} + 4\right) e^{\frac{3 x^{2}}{2}}$$

Verifying this requires combining some operations:


In [ ]:
u = rhs(out)
diff(u, x) - 3*x*u - 1


Out[ ]:
$$0$$

To solve with a general initial condition is similar:


In [ ]:
x0, y0 = 0, a
out = dsolve(eqn, x, (y, x0, y0))


Out[ ]:
$$y{\left (x \right )} = \left(a + \frac{\sqrt{6} \sqrt{\pi}}{6} \operatorname{erf}{\left (\frac{\sqrt{6} x}{2} \right )}\right) e^{\frac{3 x^{2}}{2}}$$

To plot this over a range of values for a we have:


In [ ]:
as = -2:0.6:2
ex = rhs(out)
p = plot(ex(a=>as[1]), -1.8, 1.8, ylims=(-4, 4))
for i in as[2:end]
  plot!(p, ex(a=>i), -1.8, 1.8, ylims=(-4, 4))
end
p


Out[ ]:

The comment from the example is "This plots several integral curves of the equation for different values of $a$. The plot shows that the solutions have an inflection point if the parameter lies between $-1$ and $1$ , while a global maximum or minimum arises for other values of $a$."

Example

We continue with another example from the Wolfram documentation, that of solving $y'' + 5y' + 6y=0$ with values prescribed for both $y$ and $y'$ at $x_0=0$.


In [ ]:
y = SymFunction("y")
x = symbols("x")
eqn = y''(x) + 5y'(x) + 6y(x)


Out[ ]:
$$6 y{\left (x \right )} + 5 \frac{d}{d x} y{\left (x \right )} + \frac{d^{2}}{d x^{2}} y{\left (x \right )}$$

To solve with $y(0) = 1$ and $y'(0) = 1$ we have:


In [ ]:
out = dsolve(eqn, x, (y, 0, 1), (y', 0, 1))


Out[ ]:
$$y{\left (x \right )} = \left(4 - 3 e^{- x}\right) e^{- 2 x}$$

To make a plot, we only need the right-hand-side of the answer:


In [ ]:
plot(rhs(out), -1/3, 2)


Out[ ]:
Example

Boundary value problems can be solved for as well through a similar syntax. Continuing with examples from the Wolfram page, we solve $y''(x) +y(x) = e^x$ over $[0,1]$ with conditions $y(0)=1$, $y(1) = 1/2$:


In [ ]:
eqn = y''(x) + y(x) - exp(x)
dsolve(eqn, x, (y, 0, 1), (y, 1, 1//2))


Out[ ]:
$$y{\left (x \right )} = \frac{e^{x}}{2} + \frac{\sin{\left (x \right )}}{2 \sin{\left (1 \right )}} \left(- e - \cos{\left (1 \right )} + 1\right) + \frac{1}{2} \cos{\left (x \right )}$$