Example 1.1.2

Recall the grade-school approximation to the number $\pi$.


In [1]:
p = 22/7


Out[1]:
3.142857142857143

Note that not all the digits displayed for p are the same as those of $\pi$. As an approximation, its absolute and relative accuracies are


In [2]:
print("absolute accuracy: ",abs(p-pi))


absolute accuracy: 0.0012644892673496777

In [3]:
rel_accuracy = abs(p-pi)/pi
print("relative accuracy: ",rel_accuracy)


relative accuracy: 0.0004024994347707008

In [4]:
print("accurate digits: ",-log(10,rel_accuracy))


accurate digits: 3.395234725174716

Example 1.1.3

There is no double precision number between $1$ and $1+\varepsilon_\text{mach}$. Thus the following difference is zero despite its appearance.


In [5]:
e = eps()/2
(1.0 + e) - 1.0


Out[5]:
0.0

However, $1-\varepsilon_\text{mach}/2$ is a double precision number, so it and its negative are represented exactly:


In [6]:
1.0 + (e - 1.0)


Out[6]:
1.1102230246251565e-16

This is now the "correct" result. But we have found a rather shocking breakdown of the associative law of addition!
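
To see this breakdown in a single expression, we can compare the two groupings directly (a quick check, not part of the original notebook):

(1.0 + e) - 1.0 == 1.0 + (e - 1.0)

The comparison evaluates to false: the left side rounds to zero, while the right side equals $\varepsilon_\text{mach}/2$.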

Example 1.3.2

Here we show how to use horner to evaluate a polynomial. It's not part of core Julia, so we need to load it first. All the functions for this text are loaded by the following line.


In [7]:
include("FNC.jl");

Now we define a vector of the coefficients of $p(x)=(x-1)^3=x^3-3x^2+3x-1$, in descending degree order.


In [8]:
c = [1,-3,3,-1]


Out[8]:
4-element Array{Int64,1}:
  1
 -3
  3
 -1

In order to avoid clashes between similarly named functions, all the functions for this book are sandboxed inside a namespace (module) called FNC. We use this namespace prefix whenever we invoke one of them.


In [9]:
FNC.horner(c,1.6)


Out[9]:
0.21600000000000041

The above is the value (up to roundoff) of $p(1.6)$. While it does lead to a little extra typing, a nice side effect of using the namespace paradigm is that if you type FNC. (including the period) and hit the TAB key, you will see a list of all the functions known in that namespace.
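
As a quick sanity check (not part of the original notebook), we can evaluate the factored form of the polynomial directly:

(1.6 - 1)^3

Up to roundoff, this should agree with the value returned by FNC.horner above.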

Example 1.3.3

The quadratic $x^2-(10^6+10^{-6})x+1$ has the exact roots $10^6$ and $10^{-6}$. Here we compute both of them with the quadratic formula.

In [10]:
a = 1;  b = -(1e6+1e-6);  c = 1;

In [11]:
x1 = (-b + sqrt(b^2-4*a*c)) / (2*a)


Out[11]:
1.0e6

In [12]:
x2 = (-b - sqrt(b^2-4*a*c)) / (2*a)


Out[12]:
1.00000761449337e-6

The first value is correct to all stored digits, but the second has fewer than six accurate digits:


In [13]:
-log(10, abs(1e-6-x2)/1e-6 )


Out[13]:
5.118358987126217
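
The loss of accuracy is caused by subtractive cancellation. In the numerator of x2, the two quantities being subtracted are nearly equal, as a direct look suggests (an illustrative check, not part of the original notebook):

println(-b)
println(sqrt(b^2 - 4*a*c))

The two values differ by only about $2\times 10^{-6}$ out of a magnitude of $10^6$, a relative difference of about $2\times 10^{-12}$. The subtraction therefore cancels roughly twelve significant digits, leaving about the five accurate digits observed above.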

Example 1.3.4

We repeat the rootfinding of Example 1.3.3, this time using a formula for the smaller root that avoids subtractive cancellation.

In [14]:
a = 1;  b = -(1e6+1e-6);  c = 1;

First, we find the "good" root using the quadratic formula.


In [15]:
x1 = (-b + sqrt(b^2-4*a*c)) / (2*a)


Out[15]:
1.0e6

Then we use the alternative formula $x_2 = c/(a x_1)$ for the other root, which follows from the fact that the product of the two roots equals $c/a$.


In [16]:
x2 = c/(a*x1)


Out[16]:
1.0e-6
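
To confirm the improvement, we can repeat the accuracy measurement from Example 1.3.3 (an illustrative check, not part of the original notebook):

-log(10, abs(1e-6 - x2) / 1e-6)

The displayed value of x2 suggests that the relative error is now zero, in which case this expression returns Inf; every stored digit is accurate.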

Example 1.3.5

For this example we will use a publicly available package for working with polynomials. If you have followed the installation instructions for these scripts, it can be loaded with the following line.


In [17]:
using Polynomials

Our first step is to construct a polynomial with six known roots.


In [18]:
r = [-2.0,-1,1,1,3,6]
p = poly(r)


Out[18]:
36.0 - 36.0∙x - 43.0∙x^2 + 44.0∙x^3 + 6.0∙x^4 - 8.0∙x^5 + 1.0∙x^6

Now we use a standard numerical method for finding those roots, pretending that we don't know them already.


In [19]:
r_computed = sort(roots(p))


Out[19]:
6-element Array{Float64,1}:
 -2.0000000000000013
 -0.999999999999999 
  0.9999999902778504
  1.0000000097221495
  2.9999999999999996
  5.999999999999992 

Here are the relative errors in each of the computed roots. The @. notation at the start causes the given operations to be applied elementwise to the vectors.


In [20]:
@. abs(r - r_computed) / r


Out[20]:
6-element Array{Float64,1}:
 -6.661338147750939e-16 
 -9.992007221626409e-16 
  9.722149640900568e-9  
  9.722149529878266e-9  
  1.4802973661668753e-16
  1.3322676295501878e-15

It seems that the forward error is acceptably close to machine epsilon for double precision in all cases except the double root at $x=1$. This is not a surprise, though, given the poor conditioning at such roots.
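
The size of those errors is consistent with conditioning theory: a relative perturbation of size $\varepsilon$ in the coefficients can move a double root by an amount on the order of $\sqrt{\varepsilon}$. As a rough illustration (not part of the original notebook):

sqrt(eps())

This is about $1.5\times 10^{-8}$, the same order of magnitude as the errors observed at the double root.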

Let's consider the backward error. The data in the rootfinding problem are the polynomial coefficients. We can apply poly to find the coefficients of the polynomial (that is, the data) whose roots were actually computed by the numerical algorithm.


In [21]:
p_computed = poly(r_computed)


Out[21]:
35.99999999999993 - 35.999999999999915∙x - 42.99999999999997∙x^2 + 43.99999999999998∙x^3 + 5.999999999999973∙x^4 - 7.999999999999991∙x^5 + 1.0∙x^6

We find that in a relative sense, these coefficients are very close to those of the original, exact polynomial:


In [22]:
cp = coeffs(p)
cpc = coeffs(p_computed)
@. abs(cp-cpc)/cp


Out[22]:
7-element Array{Float64,1}:
  1.973729821555834e-15 
 -2.3684757858670005e-15
 -6.609699867535816e-16 
  4.844609562000683e-16 
  4.440892098500626e-15 
 -1.1102230246251565e-15
  0.0                   

In summary, even though there are some computed roots relatively far from their correct values, they are nevertheless the roots of a polynomial that is very close to the original.
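
As one last check (not part of the original notebook, and assuming the polyval function from this version of Polynomials), we could evaluate the original polynomial at the computed roots:

[polyval(p, x) for x in r_computed]

The residuals should all be small, which is another way of seeing that the backward error is small.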