The Mandelbrot set is a beautiful fractal (Wikipedia). More precisely, it contains all numbers $c$ from the complex plane for which the complex quadratic polynomial
\begin{align} z_{n+1} = z_{n}^2 + c \end{align}
remains bounded. A complex number $c$ is part of the Mandelbrot set when, starting with $z_0=0$ and applying the iteration repeatedly, the absolute value of $z_n$ remains smaller than or equal to 2, regardless of how large $n$ becomes.
Moreover, one can draw very beautiful images by looking at the escape time of a numerical computation: we will numerically determine whether a complex number $c$ belongs to the set, and if not, we will keep track of how many iterations are needed until $|z_n| > 2$. Plotting the escape times of a sampled grid of complex numbers as a 2D image yields the famous Apfelmaennchen ("little apple man") you might have seen before.
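To make the escape-time idea concrete, here is a minimal sketch (the helper name escape_time is made up for illustration and is not part of the exercise) that iterates the polynomial for a single value of $c$ and counts how many iterations it takes until $|z_n| > 2$:

def escape_time(c, maxiterations=50):
    """Count the iterations of z -> z**2 + c (starting at z = 0) until |z| > 2."""
    z = 0.0
    for n in range(maxiterations):
        z = z**2 + c
        if abs(z) > 2:
            return n              # c escaped, so it is not in the set
    return maxiterations          # c never escaped, believed to be in the set

print(escape_time(1.0))           # escapes after a few iterations
print(escape_time(-1.0))          # oscillates between -1 and 0, never escapes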
Write a function mandelbrot(relim, imlim, resteps, imsteps, maxiterations) that computes and returns the escape times for the Mandelbrot set.

- relim and imlim are tuples that define the boundaries of the complex plane (e.g. relim=(-2,1) and imlim=(-1,1)); for convenience imlim should contain real numbers although it represents the imaginary axis.
- resteps and imsteps are integers that define the sampling resolution along each axis (e.g. 300 and 200 steps).
- maxiterations is an integer defining the maximum number of iterations (e.g. 50).

The function should sample complex numbers from the plane as defined by the parameters relim, imlim, resteps, and imsteps, and repeatedly apply the quadratic polynomial from above until the absolute value of a complex number is larger than 2. In case the maximum number of iterations (maxiterations) is reached, the number is believed to be part of the set. Accordingly, values of maxiterations in the returned array of escape times indicate complex numbers that are believed to be part of the set.

Here is a sketch of how mandelbrot may work:

- Create a 2D numpy array containing the escape times, in the beginning filled with zeros.
- Secondly, create two 1D arrays sampling each dimension of the complex plane. You might want to take a look at the np.linspace function. You can multiply one of the arrays with 1j to create the imaginary axis. Later on you can calculate each starting complex number simply by summing individual elements of both arrays.
- The easy brute-force solution involves three nested for loops. Be aware that you can use Python's break statement to leave a for loop and spare unnecessary computations.
- Iterate at most maxiterations times for any value in your complex plane and apply the quadratic polynomial from above. If the absolute value gets larger than 2, you can stop iterating and store the number of iterations you needed in the 2D escape array. Thus, for complex numbers that are actually part of the Mandelbrot set, the 2D array of escape times should contain the value maxiterations.

You can also aim for an optimized version that uses vectorized computation. Here you should only need a single for loop.

- Create a second and third 2D array containing the starting complex numbers and the intermediate values; check out the meshgrid function to create these.
- Use the second 2D array to iteratively update all values in the third array at once (vectorized computation!). Mask all values that already escaped the boundary via boolean indexing to speed up the computation.
- Remember that boolean indexing creates copies, not views! So you need to find a smart way to apply the polynomial only to values that have not escaped yet, without always keeping the full 2D array; otherwise numpy internally has to reiterate over the full array every timestep. If you also create mesh grids of matrix indices, you can reduce the 2D arrays each iteration to 1D arrays containing only the values that have not escaped so far. This is tricky! (See the short illustration right after these hints.)
- Now you can use finer resolutions than above!
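Regarding the copy-versus-view hint above, here is a small, self-contained illustration (toy arrays, not part of the solution): boolean indexing always returns a new array, and keeping an explicit index array of the not-yet-escaped entries lets you shrink the working arrays each iteration while still writing results back into the full array:

import numpy as np

a = np.arange(6)
b = a[a < 3]                  # boolean indexing returns a *copy* ...
b[:] = -1                     # ... so modifying b leaves a untouched
print(a)                      # [0 1 2 3 4 5]

# Keep index arrays to update only the "surviving" entries of the full array:
values = np.arange(6, dtype=float)
indices = np.arange(len(values))
alive = values < 4            # mask of entries that are still "active"
indices = indices[alive]      # shrink the index array ...
work = values[indices] * 10   # ... and compute only on the small working copy
values[indices] = work        # write the results back into the full array
print(values)                 # [ 0. 10. 20. 30.  4.  5.]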
In [1]:
import numpy as np
def mandelbrot(relim=(-2,1), imlim=(-1, 1), resteps=300, imsteps=200, maxiterations=50):
"""Computes the *escape times* of the numeric approximation of the Mandelbrot set.
Values of ``maxiterations`` refer to complex numbers that are believed to be part of the set.
:param relim: The limits of the real axis
:param imlim: The limits of the imaginary axis
:param resteps: Sampling resolution along real axis
:param imsteps: Sampling resolution along the imaginary axis
:param maxiterations: Maximum number of iterations
:return:
2D Array of escape times
1D Array of real samples
1D Array of imaginary samples
"""
realaxis = np.linspace(relim[0], relim[1], resteps) # Sample the real axis
imaxis = np.linspace(imlim[0], imlim[1], imsteps) * 1j # Sample the imaginary axis
escape = np.zeros((imsteps, resteps), dtype=int) # 2D array of escape times
for irun in range(resteps): # Iterate over real axis
for jrun in range(imsteps): # Iterate over imaginary axis
c = realaxis[irun] + imaxis[jrun] # Starting value
z = 0.0 # Helper variable
for krun in range(maxiterations): # Compute the escape time
z = z**2 + c
if np.abs(z) >= 2:
break # c is not part of the Mandelbrot set
else:
krun = maxiterations # If the for loop did not break c is part of the set
escape[jrun, irun] = krun
return escape, realaxis, imaxis
In [2]:
xlim = (-2,1)
ylim = (-1,1)
xsteps = 300
ysteps = 200
maxiterations = 50
In [3]:
escape, realaxis, imaxis = mandelbrot(xlim, ylim, xsteps, ysteps, maxiterations)
In [4]:
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(escape, aspect='auto', origin='lower')
plt.xticks(np.arange(xsteps)[::50], ['%.2f' % x for x in realaxis[::50]])
plt.yticks(np.arange(ysteps)[::50], ['%.2f' % x.imag for x in imaxis[::50]])
plt.xlabel('real axis')
plt.ylabel('imaginary axis')
Out[4]: [figure: escape times of the Mandelbrot set over the sampled complex plane]
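As a side note (an alternative to the manual tick labels above, not required by the exercise): imshow's extent argument can map the image axes directly to the complex-plane coordinates, which avoids converting array indices to labels by hand:

plt.imshow(escape, extent=(xlim[0], xlim[1], ylim[0], ylim[1]),
           origin='lower', aspect='auto')
plt.xlabel('real axis')
plt.ylabel('imaginary axis')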
In [5]:
def mandelbrot_pro(relim=(-2,1), imlim=(-1, 1), resteps=1500, imsteps=1000, maxiterations=50):
"""As function mandelbrot, but faster if the number of maxiterations is rather low.
It uses a vectorized approach.
"""
realaxis = np.linspace(relim[0], relim[1], resteps)
imaxis = np.linspace(imlim[0], imlim[1], imsteps) * 1j
escape = np.ones((imsteps, resteps), dtype=int) * maxiterations
realarray, imarray = np.meshgrid(realaxis, imaxis)
complexarray = realarray + imarray
indices_real = np.arange(len(realaxis)) # index array
indices_im = np.arange(len(imaxis)) # index array
mesh_indices_real, mesh_indices_im = np.meshgrid(indices_real, indices_im) # Create a mesh version
    zarray = np.zeros_like(complexarray)  # Start every iteration at z_0 = 0, matching the brute-force version
for irun in range(maxiterations):
if len(zarray) == 0:
break
        zarray *= zarray  # Application of the polynomial
zarray += complexarray
mask = np.abs(zarray) >= 2 # Mask all values that already escaped
        escape[mesh_indices_im[mask], mesh_indices_real[mask]] = irun  # Remember the escape time
inv_mask = np.invert(mask) # Invert the mask to keep only values that have not escaped, yet
        mesh_indices_real = mesh_indices_real[inv_mask]  # Filter the indices
mesh_indices_im = mesh_indices_im[inv_mask]
zarray = zarray[inv_mask] # Filter the z values
complexarray = complexarray[inv_mask] # Filter the starting values
return escape, realaxis, imaxis
In [6]:
xlim = (-2,1)
ylim = (-1,1)
xsteps = 1500
ysteps = 1000
maxiterations = 50
In [7]:
escape, realaxis, imaxis = mandelbrot_pro(xlim, ylim, xsteps, ysteps, maxiterations)
In [8]:
plt.imshow(escape, aspect='auto', origin='lower')
plt.xticks(np.arange(xsteps)[::100], ['%.2f' % x for x in realaxis[::100]])
plt.yticks(np.arange(ysteps)[::100], ['%.2f' % x.imag for x in imaxis[::100]])
plt.xlabel('real axis')
plt.ylabel('imaginary axis')
Out[8]: [figure: escape times of the Mandelbrot set at the finer resolution]
In [9]:
"""Short timing test"""
%timeit mandelbrot((-2,1), (-1,1), 45, 30, 10)
%timeit mandelbrot_pro((-2,1), (-1,1), 45, 30, 10)
We are going to simulate our first neuron model with Euler integration (Wikipedia). Basically, our neuron model consists of a single differential equation that describes the development of the membrane voltage over time. For now let's assume this equation is an arbitrary function $f$: \begin{align} \frac{dV}{dt} = f(V) \end{align} To obtain the voltage as a function of time, i.e. $V(t)$, we have to solve the differential equation. Lucky for you, we are in the computer practical and not the analytical tutorial. We are going to solve it numerically, so there's no need for a complicated Ansatz :-)
As said before, we will use simple Euler integration. Accordingly, if we assume discretized time, we can easily compute $V(t+1)$, the membrane voltage of the next timestep, if we know the previous voltage $V(t)$: \begin{align} V(t+1) = V(t) + f(V(t)) \cdot dt \end{align} with $dt$ the size of the discretized timesteps.
If we start with a chosen initial value of $V(0)$ we can iteratively solve the differential equation.
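Before turning to the neuron model, here is a minimal sketch of plain Euler integration for a simple, hypothetical choice of $f$, namely exponential decay $f(V) = -V$, whose exact solution $V(0)e^{-t}$ we can compare against (the helper name euler is made up for illustration):

import numpy as np

def euler(f, V0, duration=5.0, dt=0.01):
    """Integrate dV/dt = f(V) with the Euler method, starting from V0."""
    steps = int(duration / dt)
    V = np.zeros(steps + 1)
    times = np.arange(steps + 1) * dt
    V[0] = V0
    for step in range(1, steps + 1):
        V[step] = V[step-1] + f(V[step-1]) * dt   # V(t+1) = V(t) + f(V(t)) * dt
    return V, times

V, times = euler(lambda V: -V, V0=1.0)
print(V[-1], np.exp(-times[-1]))   # Euler estimate vs. exact solution exp(-t)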
By this method we can simulate very complex neuron models. Let's simulate a rescaled version of the exponential integrate-and-fire neuron (Scholarpedia): \begin{align} \frac{dV}{dt} = -V + \exp(V) + I \end{align}
$I$ describes a fixed input current. We will simulate several neurons fed with different current values. For some values of $I$ the membrane potential $V$ will rise to infinity; this corresponds to the upstroke of an action potential (you remember action potentials from a neurobio course, right?). However, our neuron model cannot recover from this upstroke by itself. For a smooth recovery we would need a second differential equation. Instead, we will keep our model simple and add a so-called reset rule: whenever $V$ crosses a particular threshold, i.e. $V(t)\geq V_t$, we set it back to a reset value at the next timestep, $V(t+1)=V_r$.
Write a function expIF_neuron(V, I, Vt, Vr, duration, dt) that simulates one or more exponential integrate-and-fire neurons.

- V can be a scalar or numpy array describing the initial conditions.
- I can be a scalar or numpy array describing the input currents.
- Vt is a scalar value defining the spiking threshold.
- Vr is a scalar defining the reset value after a threshold crossing.
- duration gives the length of the simulation.
- dt describes the stepsize of the Euler integration.

If V and I are arrays, the function should be vectorized, i.e. there should only be a single loop over all timesteps, but no loop over all neurons!

Simulate 5 neurons at once (do NOT call the function 5 times!) with 5 different input currents $I\in\{-3.0, -2.0, -1.0, 0.0, 1.0\}$.
In [10]:
## The template: ##
def expIF_neuron(V=-1.0, I=0.0, Vt=5.0, Vr=-1.0, duration=10.0, dt=0.01):
"""Numerically integrates the expIF neuron membrane equation with the Euler-Method.
The neurons obey a reset rule, when the membrane potential crosses `Vt`
it is set back to `Vr`.
:param V: array of initial membrane values (or a scalar)
    :param Vt: spiking threshold (scalar)
    :param Vr: reset value after a threshold crossing (scalar)
:param I: array of input currents (or scalar)
:param duration: duration of experiment (scalar)
:param dt: stepsize of Euler integration (scalar)
:return:
2D array of voltage time series, first dimension the neurons,
second dimension the voltage trace.
First entries contain the initial values.
1D array of simulation times
"""
steps = int(duration/dt) # Calculate the number of simulation steps
if isinstance(V, np.ndarray): # V can be scalar or an array, we need to check first
nneurons = len(V) # Infer the number of neurons from the length of the initial conditions
else:
nneurons = 1
V_series = np.zeros((nneurons, steps+1)) # Array that will contain the voltage traces
# 1st dim neurons, 2nd dim voltage traces
    # i.e. V_series[2, 10] would return the voltage of the third neuron (index 2) at the 11th timestep (index 10)
    # We need steps+1 entries since the 0th entry should contain the initial conditions
V_series[:, 0] = V # Set initial conditions
times = np.zeros(steps+1) # Array of timesteps
for step in range(1, steps+1): # Loop starting from step 1 (0th contains initial conditions)
############# Your code ##############
# Manipulate V_series here to simulate the neuron model
# Iteratively compute f(V(t)) = -V(t) + exp(V(t)) + I and V(t+1) = V(t) + f(V(t)) * dt
# Do not introduce another for loop, try to think vectorized
# Try using boolean indexing to implement the threshold crossing and voltage reset
######### End of your code ###########
times[step] = times[step-1] + dt # You actually don't need the times explicitly, but returning them
# will make plotting easier
return V_series, times
In [11]:
def expIF_neuron(V=-1.0, I=0.0, Vt=5.0, Vr=-1.0, duration=10.0, dt=0.01):
"""Numerically integrates the expIF neuron membrane equation with the Euler-Method.
The neurons obey a reset rule, when the membrane potential crosses `Vt`
it is set back to `Vr`.
:param V: array of initial membrane values (or a scalar)
    :param Vt: spiking threshold (scalar)
    :param Vr: reset value after a threshold crossing (scalar)
:param I: array of input currents (or scalar)
:param duration: duration of experiment (scalar)
:param dt: stepsize of Euler integration (scalar)
:return:
2D array of voltage time series, first dimension the neurons,
second dimension the voltage trace.
First entry contains the initial values.
1D array of simulation times
"""
steps = int(duration/dt) # Calculate the number of simulation steps
if isinstance(V, np.ndarray): # V can be scalar or array, we need to check first
nneurons = len(V)
else:
nneurons = 1
V_series = np.zeros((nneurons, steps+1)) # Array that will contain the voltage traces
# 1st dim neurons, 2nd dim voltage traces
    # i.e. V_series[2, 10] would return the voltage of the third neuron (index 2) at the 11th timestep (index 10)
    # We need steps+1 entries since the 0th entry should contain the initial conditions
V_series[:, 0] = V # Set initial conditions
times = np.zeros(steps+1) # Array of timesteps
for step in range(1, steps+1): # Loop starting from step 1 (0th contains initial conditions)
prev_V = V_series[:, step-1]
dV = -prev_V + np.exp(prev_V) + I
next_V = prev_V + dt * dV # Euler step
next_V[prev_V>=Vt] = Vr # Voltage reset
prev_V[prev_V>=Vt] = Vt # For better plotting we bound also the previous step
V_series[:, step] = next_V
times[step] = times[step-1] + dt # You actually don't need the times explicitly, but returning them
# will make plotting easier
return V_series, times
In [12]:
nneurons = 5
Vt = 5.0
Vr = -1.0
dt = 0.01
duration = 10.0
I = np.linspace(-3.,1.0, nneurons)
V = np.ones(nneurons)*-1.0
In [13]:
V_series, times = expIF_neuron(V, I, Vt, Vr, duration, dt)
In [14]:
for neuron in range(nneurons):
plt.plot(times, V_series[neuron,:], linewidth=2, label='I=%.1f' % (I[neuron]))
ax = plt.gca()
ax.ticklabel_format(useOffset=False) # prevents strange units
plt.xlabel('time')
plt.ylabel('V')
plt.legend()
Out[14]: [figure: voltage traces of the five simulated expIF neurons]