In Greg's original post he specifically references the Python 3 version of the n-body benchmark. In particular, he asks how to test the `advance` function.
There are a number of useful and important comments underneath the original post, in particular this thread on details of the algorithm and the problems with the code, but here we'll start from scratch.
Let's reproduce that code here (please note the OSI BSD license).
In [1]:
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
In [2]:
# The Computer Language Benchmarks Game
# http://benchmarksgame.alioth.debian.org/
#
# originally by Kevin Carson
# modified by Tupteq, Fredrik Johansson, and Daniel Nanz
# modified by Maciej Fijalkowski
# 2to3
import sys
def combinations(l):
    result = []
    for x in range(len(l) - 1):
        ls = l[x+1:]
        for y in ls:
            result.append((l[x], y))
    return result
PI = 3.14159265358979323
SOLAR_MASS = 4 * PI * PI
DAYS_PER_YEAR = 365.24
BODIES = {
    'sun': ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], SOLAR_MASS),
    'jupiter': ([4.84143144246472090e+00,
                 -1.16032004402742839e+00,
                 -1.03622044471123109e-01],
                [1.66007664274403694e-03 * DAYS_PER_YEAR,
                 7.69901118419740425e-03 * DAYS_PER_YEAR,
                 -6.90460016972063023e-05 * DAYS_PER_YEAR],
                9.54791938424326609e-04 * SOLAR_MASS),
    'saturn': ([8.34336671824457987e+00,
                4.12479856412430479e+00,
                -4.03523417114321381e-01],
               [-2.76742510726862411e-03 * DAYS_PER_YEAR,
                4.99852801234917238e-03 * DAYS_PER_YEAR,
                2.30417297573763929e-05 * DAYS_PER_YEAR],
               2.85885980666130812e-04 * SOLAR_MASS),
    'uranus': ([1.28943695621391310e+01,
                -1.51111514016986312e+01,
                -2.23307578892655734e-01],
               [2.96460137564761618e-03 * DAYS_PER_YEAR,
                2.37847173959480950e-03 * DAYS_PER_YEAR,
                -2.96589568540237556e-05 * DAYS_PER_YEAR],
               4.36624404335156298e-05 * SOLAR_MASS),
    'neptune': ([1.53796971148509165e+01,
                 -2.59193146099879641e+01,
                 1.79258772950371181e-01],
                [2.68067772490389322e-03 * DAYS_PER_YEAR,
                 1.62824170038242295e-03 * DAYS_PER_YEAR,
                 -9.51592254519715870e-05 * DAYS_PER_YEAR],
                5.15138902046611451e-05 * SOLAR_MASS)}
SYSTEM = list(BODIES.values())
PAIRS = combinations(SYSTEM)
def advance(dt, n, bodies=SYSTEM, pairs=PAIRS):
    for i in range(n):
        for (([x1, y1, z1], v1, m1),
             ([x2, y2, z2], v2, m2)) in pairs:
            dx = x1 - x2
            dy = y1 - y2
            dz = z1 - z2
            mag = dt * ((dx * dx + dy * dy + dz * dz) ** (-1.5))
            b1m = m1 * mag
            b2m = m2 * mag
            v1[0] -= dx * b2m
            v1[1] -= dy * b2m
            v1[2] -= dz * b2m
            v2[0] += dx * b1m
            v2[1] += dy * b1m
            v2[2] += dz * b1m
        for (r, [vx, vy, vz], m) in bodies:
            r[0] += dt * vx
            r[1] += dt * vy
            r[2] += dt * vz
Now, there are a number of issues with this code, as noted in comments on the original post. Instead of addressing those here, I'll point to Konrad Hinsen's upcoming article on testing (to appear in CiSE in July), or to Scopatz and Huff's book, both of which discuss n-body codes, testing, and how to code them effectively. Also, we're only interested in testing whether the implementation is "correct", not in the limitations of that implementation for modelling the physics: Konrad Hinsen points me to this paper by Boekholt and Portegies Zwart on testing n-body codes for their reliability.
In previous notebooks we compared the results of the algorithm with the expected behaviour within the context of a model. The model only considered the behaviour of the numerical algorithm: we took it for granted that the correct physics had been implemented. In this case, the `advance` function is doing two things, neither of which is immediately obvious:

- it implements the physics: Newtonian gravity acting between every pair of bodies;
- it implements a numerical algorithm: a time integrator that updates the positions and velocities.
In testing whether the results are close enough, we need to test within the context of two models: we need to check that the correct physics is implemented, and we need to check that the numerical evolution behaves as expected.
The physics that we're modelling is the classical n body problem of massive particles interacting only through gravity. The general formulation gives the equations of motion
$$ \begin{equation} m_i \frac{d^2 \vec{r}_i}{d t^2} = \sum_{j \ne i} \frac{ G m_i m_j (\vec{r}_j - \vec{r}_i) }{|\vec{r}_j - \vec{r}_i|^3} \end{equation} $$where $m_i, \vec{r}_i$ are the masses and positions of the $i^{\text{th}}$ particle. This model is Newton's laws (which are equivalent to the conservation of energy and momentum) plus Newton's law of gravity.
The numerical method we need to model uses the semi-implicit Euler method¹ to update in time, and a direct particle-particle method to compute the spatial interactions. The semi-implicit Euler method is first order accurate (with respect to the timestep $\Delta t$), and the particle-particle interactions are exact.
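In update form, a single step of the method looks as follows (a minimal sketch; the function name and the acceleration callback acc are ours, not part of the benchmark code):

# A sketch of one semi-implicit Euler step for dr/dt = v, dv/dt = acc(r).
# The velocity is updated first, using the old positions; the position
# update then uses the *new* velocity. This is exactly the structure of
# the advance function above (velocity kicks, then position updates).
def semi_implicit_euler_step(r, v, acc, dt):
    v_new = v + dt * acc(r)   # kick, from the old positions
    r_new = r + dt * v_new    # drift, with the updated velocities
    return r_new, v_new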
That is, we can think of the algorithm as solving the ODE
$$ \begin{equation} \frac{d y}{dt} = F(t, y) \end{equation} $$where the vector $y$ contains the positions $\vec{r}$ and velocities $\vec{v}$ of each body. Due to the physics of the problem (i.e., there's a total energy that we want to conserve) there's an advantage in considering the positions and velocities separately, but when it comes to measuring the error we don't need to do that: we just treat the `advance` function as an algorithm that takes the full vector $y_n$ as input and returns $y_{n+1}$.
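Concretely, we can flatten a configuration into a single state vector (a minimal sketch; the pack_state helper is ours, not part of the benchmark code):

import copy

def pack_state(bodies):
    """Flatten positions and velocities into one vector y (the masses are constants)."""
    return numpy.concatenate([numpy.array(x + v) for (x, v, m) in bodies])

# advance, viewed as the map y_n -> y_{n+1}:
y_n = pack_state(SYSTEM)
bodies_step = copy.deepcopy(SYSTEM)
advance(dt=0.01, n=1, bodies=bodies_step, pairs=combinations(bodies_step))
y_np1 = pack_state(bodies_step)
print("State vector length: {}".format(len(y_n)))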
What this means in practical terms is that we expect the error in time to show similar convergence rate behaviour to the standard Euler method discussed before: the convergence rate should be $1$, which can be checked both by the measured self-convergence rate lying between $\log_2(3/2) \approx 0.585$ and $\log_2(3) \approx 1.585$, and by the error in the convergence rate going down by at least $1/3$ as the base timestep $\Delta t$ is halved. Checking the local truncation error would require more detailed analysis, but in principle would follow the same steps as before.
What's more interesting here are the additional checks arising from the complexities of the particle-particle interaction. We can use our previous techniques to check that the time update algorithm is behaving as expected. However, the particle-particle interaction itself contains many steps, and those also need testing.
Let's check the output on (a copy of) the default input, taking a single step.
In [3]:
import copy
bodies = copy.deepcopy(SYSTEM)
advance(dt=0.01, n=1, bodies=bodies, pairs=combinations(bodies))
print(bodies)
print(SYSTEM)
Comparing these two is going to be a real pain. So, let's create a utility function that will give the difference between two configurations of bodies. It will return the differences in the locations and the velocities as arrays, but will also include the "total" error as a single number - the sum of the absolute values of all errors. If this norm is zero then the two configurations are identical.
In [4]:
def differences(bodies1, bodies2):
    """
    Compare two configurations.
    """
    assert len(bodies1) == len(bodies2), "Configurations must have same number of bodies! {}, {}".format(len(bodies1), len(bodies2))
    N = len(bodies1)
    d_positions = numpy.zeros((N, 3))
    d_velocities = numpy.zeros((N, 3))
    norm_difference = 0.0
    for n in range(N):
        d_positions[n, :] = numpy.array(bodies1[n][0]) - numpy.array(bodies2[n][0])
        d_velocities[n, :] = numpy.array(bodies1[n][1]) - numpy.array(bodies2[n][1])
        norm_difference += numpy.sum(numpy.abs(d_positions[n, :])) + numpy.sum(numpy.abs(d_velocities[n, :]))
    return norm_difference, d_positions, d_velocities
In [5]:
norm1, d_x, d_v = differences(bodies, SYSTEM)
In [6]:
d_v
Out[6]:
Now we have to think about how to test the particle-particle interaction. The parameter we can change in order to do the test is the configuration `bodies` (and the associated list of `pairs`, which we'll always take to be `combinations(bodies)`).
So we can either change the positions, or the velocities, of the bodies, or their masses (or some combination of the three). What changes make meaningful tests?
Well, the coordinates chosen should not matter, provided we change them in a self-consistent way and interpret the results appropriately. For example, if we ran time backwards by changing the direction of time (`dt` $\to$ `-dt`) and changing the sign of the velocities, then the results should be identical (after flipping the sign of the velocities back).
In [7]:
def flip_time(bodies):
    """
    Flip the time by flipping the velocity.
    """
    for i in range(len(bodies)):
        for j in range(3):
            bodies[i][1][j] *= -1.0
In [8]:
bodies_flip_time = copy.deepcopy(SYSTEM)
flip_time(bodies_flip_time)
advance(dt=-0.01, n=1, bodies=bodies_flip_time, pairs=combinations(bodies_flip_time))
flip_time(bodies_flip_time)
norm_flip_time, d_x_flip_time, d_v_flip_time = differences(bodies, bodies_flip_time)
print("Norm of differences is {}.".format(norm_flip_time))
As expected, the norm of the differences vanishes. Note that if we had not flipped the velocities back after the evolution, the positions would still be identical but the velocities would differ significantly, as they would have the opposite sign; flipping them back makes both positions and velocities identical.
Why did we do this? Because one of the first steps in debugging, and hence in testing, should be to check the conservation laws. This is just an example of a conservation law resulting from a symmetry: the consequence of Emmy Noether's famous theorem. There is a detailed list of symmetries that a physical theory could have, each of which has an associated conservation law. Not all physical models obey all symmetries, and not every numerical simulation obeys all the symmetries of the physical model (maybe due to boundary conditions, or numerical approximations). However, in the n-body case we expect a large number of these symmetries to hold, which we can check.
These include:

- continuous translation in time, corresponding to conservation of energy;
- continuous translation in space, corresponding to conservation of momentum;
- continuous rotations, corresponding to conservation of angular momentum;
- discrete symmetries, such as reversing the direction of time or reflecting individual coordinates.

In the first test, where we "ran time in reverse", we were checking discrete time symmetry. We can now check the other symmetries systematically.
We should be able to flip individual coordinates: if $y \to -y$ and $v_y \to -v_y$, for example, then the evolution of the other coordinates and velocities should be unaffected (and if this flip is reversed after evolution, the position should be identical).
In [9]:
def flip_coordinate(bodies, coord):
    """
    Flip a single coordinate direction.
    """
    for i in range(len(bodies)):
        bodies[i][0][coord] *= -1.0
        bodies[i][1][coord] *= -1.0
In [10]:
for coord in range(3):
    bodies_flip_coord = copy.deepcopy(SYSTEM)
    flip_coordinate(bodies_flip_coord, coord)
    advance(dt=0.01, n=1, bodies=bodies_flip_coord, pairs=combinations(bodies_flip_coord))
    flip_coordinate(bodies_flip_coord, coord)
    norm_flip_coord, d_x_flip_coord, d_v_flip_coord = differences(bodies, bodies_flip_coord)
    print("Norm of differences is {} (flipped coordinate {}).".format(norm_flip_coord, coord))
Next, note that only the separation between bodies is meant to matter. So if we arbitrarily translate all coordinates by some amount whilst leaving the velocities untouched, and reverse this after evolution, the result should be identical.
In [11]:
def translate_coordinate(bodies, shift):
    """
    Translate the coordinates of all bodies.
    """
    for i in range(len(bodies)):
        for n in range(3):
            bodies[i][0][n] += shift[n]
In [12]:
shift = 10.0*(-0.5+numpy.random.rand(3)) # Random coordinate shift in [-5, 5]
bodies_shift = copy.deepcopy(SYSTEM)
translate_coordinate(bodies_shift, shift)
advance(dt=0.01, n=1, bodies=bodies_shift, pairs=combinations(bodies_shift))
translate_coordinate(bodies_shift, -shift)
norm_shift, d_x_shift, d_v_shift = differences(bodies, bodies_shift)
print("Norm of differences is {}.".format(norm_shift))
In this case the repeated operations introduce some floating point round-off error. This should be related to the round-off introduced by the shift, and bounded by the number of bodies ($5$) multiplied by the total round-off introduced:
In [13]:
numpy.sum(numpy.abs(numpy.spacing(shift)))*5
Out[13]:
The precise differences are:
In [14]:
print(d_x_shift)
print(d_v_shift)
We see that some of the bodies have errors around the expected floating point limit, and these dominate the resulting error (which is well within the expected bound).
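Translation symmetry is linked, through Noether's theorem, to conservation of total momentum, and in exact arithmetic the pairwise velocity kicks in advance cancel in the momentum sum, so we expect the change to sit at round-off level. A quick check (the total_momentum helper is our own sketch, not part of the benchmark code):

def total_momentum(bodies):
    """Total momentum: the sum of m * v over all bodies."""
    return sum(m * numpy.array(v) for (x, v, m) in bodies)

momentum_before = total_momentum(SYSTEM)
bodies_momentum = copy.deepcopy(SYSTEM)
advance(dt=0.01, n=100, bodies=bodies_momentum, pairs=combinations(bodies_momentum))
momentum_after = total_momentum(bodies_momentum)
print("Change in total momentum: {}".format(momentum_after - momentum_before))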
Yet another coordinate transformation can be tried without affecting the results: a rotation. We'll pick three random Euler angles and set up the rotation matrix. In general, we should do a translation first to ensure no body is at the origin (here, the Sun will be), to avoid specializing the problem, but given the generality of the code that's a very minor worry.
In [15]:
def rotate_bodies(bodies, angles, invert=False):
    """
    Rotate the coordinates of all bodies.
    """
    Rx = numpy.array([[1.0, 0.0, 0.0],
                      [0.0, numpy.cos(angles[0]), -numpy.sin(angles[0])],
                      [0.0, numpy.sin(angles[0]), numpy.cos(angles[0])]])
    Ry = numpy.array([[numpy.cos(angles[1]), 0.0, numpy.sin(angles[1])],
                      [0.0, 1.0, 0.0],
                      [-numpy.sin(angles[1]), 0.0, numpy.cos(angles[1])]])
    Rz = numpy.array([[numpy.cos(angles[2]), -numpy.sin(angles[2]), 0.0],
                      [numpy.sin(angles[2]), numpy.cos(angles[2]), 0.0],
                      [0.0, 0.0, 1.0]])
    if invert:
        R = numpy.dot(numpy.dot(Rx, Ry), Rz)
    else:
        R = numpy.dot(Rz, numpy.dot(Ry, Rx))
    for i in range(len(bodies)):
        x = numpy.array(bodies[i][0])
        v = numpy.array(bodies[i][1])
        xp = numpy.dot(R, x)
        vp = numpy.dot(R, v)
        for n in range(3):
            bodies[i][0][n] = xp[n]
            bodies[i][1][n] = vp[n]
In [16]:
angles = numpy.pi/4.0*numpy.random.rand(3) # Random Euler angles in [0, pi/4]
bodies_rotate = copy.deepcopy(SYSTEM)
rotate_bodies(bodies_rotate, angles)
advance(dt=0.01, n=1, bodies=bodies_rotate, pairs=combinations(bodies_rotate))
rotate_bodies(bodies_rotate, -angles, invert=True)
norm_rotate, d_x_rotate, d_v_rotate = differences(bodies, bodies_rotate)
print("Norm of differences is {}.".format(norm_rotate))
The size of the differences is considerably larger, thanks to the number of operations performed. With three rotation matrices combined, each containing nine entries, applied to five bodies, we would expect a total error of the order of:
In [17]:
3*9*5*numpy.sum(numpy.abs(numpy.spacing(angles)))
Out[17]:
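Similarly, rotation symmetry corresponds to conservation of total angular momentum, $\sum_i m_i \vec{r}_i \times \vec{v}_i$, which both the kick and the drift stages of this scheme preserve in exact arithmetic (the kicks act along the separation vectors, and the drifts add $\vec{v} \times \vec{v} = 0$). A quick check (the total_angular_momentum helper is our own sketch):

def total_angular_momentum(bodies):
    """Total angular momentum: the sum of m * (r x v) over all bodies."""
    return sum(m * numpy.cross(x, v) for (x, v, m) in bodies)

L_before = total_angular_momentum(SYSTEM)
bodies_L = copy.deepcopy(SYSTEM)
advance(dt=0.01, n=100, bodies=bodies_L, pairs=combinations(bodies_L))
L_after = total_angular_momentum(bodies_L)
print("Change in total angular momentum: {}".format(L_after - L_before))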
We could also try scaling the coordinates and the masses. As the force goes as $M^2/L^2$, if we scale time, position and mass by the same amount then the results should not change. Note that because we scale both length and time, the velocity does not change.
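As a quick check of this dimensional argument, under the scaling $m \to s \, m$, $r \to s \, r$, $\Delta t \to s \, \Delta t$ the acceleration and the discrete updates transform as

$$ \begin{equation} a = \frac{G m}{r^2} \to \frac{G (s m)}{(s r)^2} = \frac{a}{s}, \qquad \Delta v = a \, \Delta t \to \frac{a}{s} (s \Delta t) = \Delta v, \qquad \Delta r = v \, \Delta t \to s \, \Delta r, \end{equation} $$

so the velocities are untouched and the position updates scale along with the coordinates, exactly as required.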
In [18]:
def scale_bodies(bodies, scale):
    """
    Scale coordinates and masses.
    """
    bodies_scale = []
    for (x, v, m) in bodies:
        new_x = copy.deepcopy(x)
        new_v = copy.deepcopy(v)
        new_m = m * scale
        for i in range(3):
            new_x[i] *= scale
        bodies_scale.append((new_x, new_v, new_m))
    return bodies_scale
In [19]:
scale = 2.0
bodies_scale = scale_bodies(SYSTEM, scale)
advance(dt=0.02, n=1, bodies=bodies_scale, pairs=combinations(bodies_scale))
bodies_rescale = scale_bodies(bodies_scale, 1.0/scale)
norm_scale, d_x_scale, d_v_scale = differences(bodies, bodies_rescale)
print("Norm of differences is {}.".format(norm_scale))
This scaling does not prove that the force goes as $M^2/L^2$; it only shows that the force contains $M$ and $L$ to the same power. To show that it's got the appropriate form we should either compare to an oracle or test simpler cases. The Java n-body code is (likely) the oracle it was tested against; testing against a simpler case will be done later.
Above, we noted that continuous time translation (which corresponds to conservation of energy) may not be perfectly retained by our numerical method, even though it is perfectly preserved by the theory. However, as we're solving an ODE in time, we can use the techniques applied to the phugoid model to ensure the numerical method is converging at the expected rate. As we're expecting the method to converge at first order, we need the error in the measured self-convergence rate to reduce by at least $1/3$ when the resolution (in time) is reduced by $2$.
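Before measuring convergence it's worth seeing the energy behaviour in practice: a symplectic method does not conserve the energy exactly, but the drift should remain small. A quick illustration (the total_energy helper is our own sketch; in these units $G = 1$, since the masses carry the factor of $4 \pi^2$):

def total_energy(bodies):
    """Kinetic plus gravitational potential energy (G = 1 in these units)."""
    kinetic = sum(0.5 * m * numpy.dot(v, v) for (x, v, m) in bodies)
    potential = 0.0
    for ((x1, v1, m1), (x2, v2, m2)) in combinations(bodies):
        r = numpy.linalg.norm(numpy.array(x1) - numpy.array(x2))
        potential -= m1 * m2 / r
    return kinetic + potential

energy_start = total_energy(SYSTEM)
bodies_energy = copy.deepcopy(SYSTEM)
advance(dt=0.01, n=1000, bodies=bodies_energy, pairs=combinations(bodies_energy))
energy_end = total_energy(bodies_energy)
print("Relative energy drift over 10 years: {}".format((energy_end - energy_start)/abs(energy_start)))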
In [20]:
T = 10.0 # The base resolution will take 1000 steps
dt_values = numpy.array([0.01*2**(-i) for i in range(4)])
bodies_list = []
for i, dt in enumerate(dt_values):
    bodies_loop = copy.deepcopy(SYSTEM)
    advance(dt=dt, n=int(T/dt), bodies=bodies_loop, pairs=combinations(bodies_loop))
    bodies_list.append(bodies_loop)
In [21]:
convergence_rate = numpy.zeros((2,))
for i in range(len(convergence_rate)):
    numerator, diff_x, diff_v = differences(bodies_list[i], bodies_list[i+1])
    denominator, diff_x, diff_v = differences(bodies_list[i+1], bodies_list[i+2])
    convergence_rate[i] = numpy.log(numerator/denominator)/numpy.log(2.0)
    print("Convergence rate (base dt={}) is {} (error {}).".format(
        dt_values[i], convergence_rate[i], numpy.abs(convergence_rate[i]-1.0)))
print("Is the convergence rate close enough to 1 for the answers to match? {}".format(
    numpy.log(1.0+0.5)/numpy.log(2.0) < convergence_rate[1] < numpy.log(2.0**2-1.0)/numpy.log(2.0)))
print("Does the convergence of the convergence rate show it's close enough to 1? {}".format(
    numpy.abs(convergence_rate[1]-1.0) < 2.0/3.0*numpy.abs(convergence_rate[0]-1.0)))
This shows that the time evolution is close enough to first order, as expected. We haven't explicitly shown that it's the semi-implicit Euler method, as explicitly calculating the local truncation error would be a lot of work, and in this case we're really not interested in the specific time integration scheme - just that it converges.
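That said, a cheaper alternative to the truncation-error analysis is a direct comparison against an independently written step. A minimal sketch, assuming the footnoted identification of the scheme is right (the semi_implicit_step helper is our own; its result should agree with advance to round-off):

def semi_implicit_step(bodies, dt):
    """An independently written semi-implicit Euler step: kick, then drift."""
    bodies = copy.deepcopy(bodies)
    # Kick: update every velocity from the pairwise forces at the old positions.
    for ((x1, v1, m1), (x2, v2, m2)) in combinations(bodies):
        dx, dy, dz = x1[0]-x2[0], x1[1]-x2[1], x1[2]-x2[2]
        mag = dt * ((dx*dx + dy*dy + dz*dz) ** (-1.5))
        b1m, b2m = m1 * mag, m2 * mag
        for k, d in enumerate((dx, dy, dz)):
            v1[k] -= d * b2m
            v2[k] += d * b1m
    # Drift: update every position using the new velocities.
    for (x, v, m) in bodies:
        for k in range(3):
            x[k] += dt * v[k]
    return bodies

step_oracle = semi_implicit_step(SYSTEM, 0.01)
step_advance = copy.deepcopy(SYSTEM)
advance(dt=0.01, n=1, bodies=step_advance, pairs=combinations(step_advance))
print("Norm of differences is {}.".format(differences(step_oracle, step_advance)[0]))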
In order to check that we've really implemented Newton's law of gravity, we need to test a very simplified situation where we can predict the precise answer. Let's set up a two body problem starting from rest.
Once we've done that, we can vary one parameter and check that the acceleration behaves as expected. The two cases we have are varying a mass, so that if $m_1 \to b m_1$ then
$$ \begin{equation} \vec{a}_2(b) = b \vec{a_2}(1) \end{equation} $$and varying separation $|\vec{r}_1 - \vec{r}_2|$ so that $|\vec{r}_1 - \vec{r}_2| \to c |\vec{r}_1 - \vec{r}_2|$
$$ \begin{equation} \vec{a}_2(c) = c^{-2} \vec{a}_2(1). \end{equation} $$Over a single timestep $\Delta t$ we can approximate the acceleration as
$$ \begin{equation} \vec{a}_2(t=0) \simeq \frac{\vec{v}_2(t=\Delta t) - \vec{v}_2(t=0)}{\Delta t}. \end{equation} $$For this specific algorithm we have that
$$ \begin{equation} \vec{v}_2(t=\Delta t) = \vec{v}_2(t=0) + \Delta t \, \vec{a}_2(t=0), \end{equation} $$which means that the approximation for the acceleration is, in fact, exact:
$$ \begin{equation} \vec{a}_2(t=0) = \frac{\vec{v}_2(t=\Delta t) - \vec{v}_2(t=0)}{\Delta t}. \end{equation} $$For a general algorithm we would have a first order in $\Delta t$ error in our approximation to the acceleration, so would have to use Richardson extrapolation methods as before in order to calculate the acceleration, and hence check the force.
In [22]:
body_1 = ([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
body_2 = ([-1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
body_1_2_separation = ([3.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
body_1_2_mass = ([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.2*SOLAR_MASS)
two_bodies = [copy.deepcopy(body_1), copy.deepcopy(body_2)]
two_bodies_2_separation = [copy.deepcopy(body_1_2_separation), copy.deepcopy(body_2)]
two_bodies_2_mass = [copy.deepcopy(body_1_2_mass), copy.deepcopy(body_2)]
advance(dt=0.01, n=1, bodies=two_bodies, pairs=combinations(two_bodies))
advance(dt=0.01, n=1, bodies=two_bodies_2_separation, pairs=combinations(two_bodies_2_separation))
advance(dt=0.01, n=1, bodies=two_bodies_2_mass, pairs=combinations(two_bodies_2_mass))
print("The acceleration (hence velocity) should decrease as separation^2")
print("So these two numbers should match: {} {} (difference: {})".format(
two_bodies[1][1][0],
two_bodies_2_separation[1][1][0]*4.0,
numpy.abs(two_bodies[1][1][0]-two_bodies_2_separation[1][1][0]*4.0)))
print("The acceleration (hence velocity) should increase as mass^1")
print("So these two numbers should match: {} {} (difference: {})".format(
two_bodies[1][1][0],
two_bodies_2_mass[1][1][0]/2.0,
numpy.abs(two_bodies[1][1][0]-two_bodies_2_mass[1][1][0]/2.0)))
Of course, we could have just got (un)lucky in comparing single data points when we vary either $b$ or $c$. Instead we can choose a set of points at random, and fit to a more general model; for example, fit to
$$ \begin{equation} \vec{a}_2 (b) = \sum_{i=0}^4 p_i b^i \end{equation} $$or
$$ \begin{equation} \vec{a}_2 (c) = \sum_{i=0}^4 p_i c^{-i}: \end{equation} $$
In [23]:
bodies_list_separation = []
separation_scale = numpy.random.rand(5)
separations = separation_scale + 1.0
separation_v = numpy.zeros_like(separations)
for i, scale in enumerate(separation_scale):
    body_1_separation = ([scale, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
    two_bodies_separation = [copy.deepcopy(body_1_separation), copy.deepcopy(body_2)]
    advance(dt=0.01, n=1, bodies=two_bodies_separation, pairs=combinations(two_bodies_separation))
    bodies_list_separation.append(two_bodies_separation)
    separation_v[i] = two_bodies_separation[1][1][0]
bodies_list_mass = []
mass_scale = numpy.random.rand(5)
masses = 0.1 * mass_scale
mass_v = numpy.zeros_like(masses)
for i, scale in enumerate(mass_scale):
    body_1_mass = ([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*scale*SOLAR_MASS)
    two_bodies_mass = [copy.deepcopy(body_1_mass), copy.deepcopy(body_2)]
    advance(dt=0.01, n=1, bodies=two_bodies_mass, pairs=combinations(two_bodies_mass))
    bodies_list_mass.append(two_bodies_mass)
    mass_v[i] = two_bodies_mass[1][1][0]
In [24]:
p_separation = numpy.polyfit(1./separations, separation_v, len(separations)-1)
p_mass = numpy.polyfit(masses, mass_v, len(masses)-1)
print("We expect the third-to-last (separation^{-2}) coefficient to dominate:")
print("Coefficients from separation: {}".format(p_separation))
print("Coefficient of separation^{{-2}}: {:.5g}".format(p_separation[-3]))
print("Largest other coefficient: {:.5g}".format(numpy.max(numpy.abs(numpy.delete(p_separation,-3)))))
print("We expect the second-to-last (mass^{1}) coefficient to dominate:")
print("Coefficients from mass: {}".format(p_mass))
print("Coefficient of mass^{{1}}: {:.5g}".format(p_mass[-2]))
print("Largest other coefficient: {:.5g}".format(numpy.max(numpy.abs(numpy.delete(p_mass,-2)))))
We see that the expected coefficient dominates, but the fit isn't absolutely perfect. This is likely down to the ill-conditioning of the high-degree polynomial fit.
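A more robust alternative (our suggestion, reusing the data generated above) is to fit a straight line in log-log space, where the slopes estimate the powers directly:

# Straight-line fits of log|v| against log(separation) and log(mass):
# the slopes should be close to -2 and 1 respectively.
slope_sep = numpy.polyfit(numpy.log(separations),
                          numpy.log(numpy.abs(separation_v)), 1)[0]
slope_mass = numpy.polyfit(numpy.log(masses),
                           numpy.log(numpy.abs(mass_v)), 1)[0]
print("Measured power of separation: {:.5f} (expected -2)".format(slope_sep))
print("Measured power of mass: {:.5f} (expected 1)".format(slope_mass))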
So, we have an algorithm that:

- converges at first order in time, as expected;
- respects the discrete and continuous symmetries we checked above;
- produces an acceleration that scales linearly with the mass and with the inverse square of the separation.

This strongly suggests that it implements Newton's laws, and that the force term that's implemented has the form
$$ \vec{F} = G \frac{m_1 m_2 (\vec{r}_2 - \vec{r}_1)}{|\vec{r}_2 - \vec{r}_1|^3}. $$The only remaining question: is the value of $G$ correct?
Our previous calculations give us the value of $G$ internal to the algorithm: we just take our value for the acceleration, $\vec{a}_2 = \vec{F}/m_2$, and divide out the mass, separation, and timestep:
In [25]:
dt = 0.01
separation = 2.0
mass = 0.1
G = two_bodies[1][1][0]/mass*separation**2/dt
print("Algorithm G, in units of AU^3 SolarMass^{{-1}} years^{{-2}}: {}".format(G))
In [26]:
from scipy import constants
print("Accepted value of G, in units of {}: {}".format(
constants.unit('Newtonian constant of gravitation'),
constants.G))
In [27]:
solar_mass_in_kg = 1.9891e30
year_in_seconds = 3.15569e7
AU_in_metres = 149597870700.0
print("Algorithm G, in units of m^3 kg^-1 s^-2: {}".format(
G*(AU_in_metres**3/(solar_mass_in_kg)/year_in_seconds**2)))
So the constants agree to three significant figures. Given that the conversion factors used are only quoted to five or six significant figures, and that there is experimental uncertainty in the measured value of the gravitational constant, I would say this check is close enough.
¹ The method that got me to this conclusion is classic *just Google it*, as follows. Looking at the original n-body example there's no useful documentation, but the [about section](http://benchmarksgame.alioth.debian.org/u32/performance.php?test=nbody#about) says that it uses a simple [symplectic integrator](http://en.wikipedia.org/wiki/Symplectic_integrator). The [first-order example](http://en.wikipedia.org/wiki/Symplectic_integrator#A_first-order_example) is the [semi-implicit Euler method](http://en.wikipedia.org/wiki/Semi-implicit_Euler_method), which appears to match the code. ↩