In physics, mathematics, statistics, and economics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables are multiplied by a common factor; such laws thus represent a universality.
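Power laws are the canonical example: a function f(x) = c·x^(−k) satisfies f(λx) = λ^(−k)·f(x) for any rescaling factor λ, so zooming in or out changes only the overall scale of the law, never its form.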
The Abelian sandpile model, also known as the Bak-Tang-Wiesenfeld model, was the first discovered example of a dynamical system displaying self-organized criticality.
The model is a cellular automaton. In its original formulation, each cell on a finite grid has an associated value that corresponds to the slope of the pile. This slope builds up as "grains of sand" (or "chips") are randomly placed onto the pile, until the slope exceeds a specific threshold value, at which point the site collapses, transferring sand into the adjacent sites and increasing their slope. Sand grains are placed randomly on the grid, one after another; each placement at a particular site may have no effect, or it may trigger a cascading reaction that affects many sites.
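To make the rule concrete, here is a minimal sketch of one toppling step of the classic rule (threshold of four grains, one grain passed to each of the four neighbours, grains falling off the edge discarded). topple_once is a toy helper written for this note; the full, parameterized implementation used in this notebook appears further below.

import numpy as np

def topple_once(board, threshold=4):
    # Apply one parallel toppling step of the classic rule:
    # every cell at or above the threshold loses `threshold` grains,
    # and each of its four neighbours gains one grain (open boundaries).
    unstable = board >= threshold
    board = board - threshold * unstable
    # Pad so grains falling off the edge are discarded
    padded = np.pad(unstable.astype(int), 1)
    board += (padded[2:, 1:-1] + padded[:-2, 1:-1]
              + padded[1:-1, 2:] + padded[1:-1, :-2])
    return board

# A single site with 4 grains topples, sending one grain to each neighbour
b = np.zeros((3, 3), dtype=int)
b[1, 1] = 4
print(topple_once(b))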
The original interest behind the model stemmed from the fact that in simulations on lattices, it is attracted to its critical state, at which point the correlation length of the system and the correlation time of the system go to infinity, without any fine tuning of a system parameter. This contrasts with earlier examples of critical phenomena, such as the phase transitions between solid and liquid, or liquid and gas, where the critical point can only be reached by precise tuning (e.g., of temperature). Hence, in the sandpile model we can say that the criticality is self-organized.
Once the sandpile model reaches its critical state there is no correlation between the system's response to a perturbation and the details of a perturbation. Generally this means that dropping another grain of sand onto the pile may cause nothing to happen, or it may cause the entire pile to collapse in a massive slide. The model also displays 1/ƒ noise, a feature common to many complex systems in nature.
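Here "1/ƒ noise" refers to a signal whose power spectral density falls off roughly as the inverse of the frequency, S(f) ∝ 1/f, so its fluctuations have no single dominant timescale.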
The example below displays a standard Abelian sandpile. You can watch the distribution of the grains over time as they spread from where they are dropped. Modify the parameters in the code to explore some variations.
In [1]:
%%HTML
<iframe src="https://www.openprocessing.org/sketch/434784/embed/" width="450" height="510"></iframe>
To analyse the results, we will code the sandpile algorithm. Every time a new grain is dropped at the center, this may produce nothing, or it may displace several other grains. Sometimes this displacement spreads across the whole sandpile in a big avalanche that travels from the center to the border and back. The size of the avalanche for each new grain is the number of grains displaced by that grain.
The sandpile function returns the board containing the grains and the number of grains displaced at every time step. To save computation time, one time step means dropping the maximum number of grains allowed for a pile, since dropping fewer grains produces no avalanches.
In [23]:
import math
import numpy as np
# Algorithm of the Sandpile model
def sandPileModel(size, numGrainsDropped, maxGrainsPerPile, probabilityFalldown):
    sizeBoard = size  # size of the board (each pixel is a pile)
    board = np.zeros((sizeBoard+2, sizeBoard+2))
    # Grains fallen from a pile when it has more than maxGrainsPerPile
    grainsFallen = maxGrainsPerPile * probabilityFalldown
    # Total grains displaced for every timestep. In each timestep we drop 'maxGrainsPerPile' grains.
    avalancheSize = np.zeros((math.floor(numGrainsDropped/maxGrainsPerPile), 1))
    for i in range(len(avalancheSize)):
        grainsDisplaced = 0
        board[int(sizeBoard/2)+1, int(sizeBoard/2)+1] += maxGrainsPerPile
        # Run until a stable state is reached
        while np.max(board) >= maxGrainsPerPile:
            # Find the piles at or above the threshold
            toohigh = board >= maxGrainsPerPile
            grainsDisplaced += grainsFallen * np.sum(toohigh)
            # Decrease the highest piles
            board[toohigh] -= grainsFallen
            # Increase the four adjacent piles
            board[1:, :][toohigh[:-1, :]] += grainsFallen / 4
            board[:-1, :][toohigh[1:, :]] += grainsFallen / 4
            board[:, 1:][toohigh[:, :-1]] += grainsFallen / 4
            board[:, :-1][toohigh[:, 1:]] += grainsFallen / 4
            # Reset the overspill at the borders (grains fall off the board)
            board[0:1, :] = 0
            board[1+sizeBoard:, :] = 0
            board[:, 0:1] = 0
            board[:, 1+sizeBoard:] = 0
        # Store the number of grains displaced in this timestep
        avalancheSize[i] = grainsDisplaced
    # Drop the border, then remove empty rows and columns
    board = board[1:1+sizeBoard, 1:1+sizeBoard]
    board = [x for x in board if any(x)]
    board = np.transpose(board)
    board = [x for x in board if any(x)]
    board = np.transpose(board)
    return [board, avalancheSize]
Now we will call the function with different parameters to analyse the results. We will plot the resulting board, the size of the avalanche at every timestep, and the histogram of those avalanche sizes. The histogram tells us how many avalanches occurred within a certain size range.
In [25]:
%matplotlib inline
from matplotlib import pyplot as plt
# Width of the board (height is equal to width)
BOARD_SIZE = 100
# Maximum number of stackable sand grains that will cause an avalanche
MAX_GRAINS = 4
# Probability of grains falling down to adjacent piles during an avalanche
FALL_PROBABILITY = 1
# Total number of grains dropped on the board
NUM_GRAINS = 10000
# Calculate the simulation
[board, avalancheSize] = sandPileModel(BOARD_SIZE, NUM_GRAINS, MAX_GRAINS, FALL_PROBABILITY)
# Figures
fig = plt.figure(figsize=(15,4))
# Sandpile board
plt.subplot(1,3,1)
plt.axis('off')
plt.imshow(board / np.max(board), cmap='gray', vmin=0, vmax=1)
# Chronogram of avalanches
ax = plt.subplot(1,3,2)
plt.xlabel('Time')
plt.ylabel('Avalanche size')
#ax.set_xscale('log')
#ax.set_yscale('log')
#ax.set_xlim([max(0, len(avalancheSize)-300), len(avalancheSize)])
plt.plot(range(len(avalancheSize)), avalancheSize)
# Histogram of avalanches
ax = plt.subplot(1,3,3)
plt.xlabel('Avalanche size')
plt.ylabel('Number of occurrences')
#plt.hist(avalancheSize, bins=np.linspace(0,max(avalancheSize),num=10), rwidth=0.9)
ax.set_xscale('log')
ax.set_yscale('log')
plt.hist(avalancheSize, bins=np.logspace(np.log10(MAX_GRAINS),np.log10(max(avalancheSize)),num=10), rwidth=0.9)
plt.show()
The Bak–Sneppen model is a simple model of co-evolution between interacting species. It was developed to show how self-organized criticality may explain key features of the fossil record, such as the distribution of sizes of extinction events and the phenomenon of punctuated equilibrium.
The model dynamics repeatedly eliminate the least adapted species and mutate it and its neighbours, recreating the interaction between species. We consider N species, indexed by integers i around a ring, each associated with a fitness factor f(i). The algorithm consists of choosing the least fit species and then replacing it and its two closest neighbours (the previous and the next species) with new species of random fitness. After a long run there will be a minimum required fitness below which species do not survive. The cascades of replacements triggered this way are referred to as avalanches, and the model proceeds through these avalanches until it reaches a state of relative stability where all species' fitness values are above a certain threshold.
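Since the update rule is so compact, a minimal sketch of one generation may help. bs_step is a toy helper written for this note; the parameterized implementation used for the plots appears further below.

import numpy as np

def bs_step(f):
    # One Bak-Sneppen generation on a ring of fitness values:
    # the least fit species and its two neighbours get new random fitness.
    N = len(f)
    i = np.argmin(f)                 # least fit species
    for j in (i - 1, i, i + 1):
        f[j % N] = np.random.rand()  # replace it and its neighbours
    return f

f = np.random.rand(10)
for _ in range(1000):
    f = bs_step(f)
print(f.min())  # the minimum fitness rises as the least fit species keep being replaced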
We can also use this model to study scientific progress. Accounts of scientific change roughly fall into two categories: one-process and two-process change. One-process views hold that scientific change is a single process of movement toward an independent point of reference, for example Popper's cycle of conjecture and refutation. One of the most influential criticisms of this view is by Thomas Kuhn, who held that standards for science are not independent but coevolve with scientific activity itself. Science then does not evolve toward a goal but away from goals previously set, with no fixed point left to compare its progress against. Without such a stable "Archimedean platform," there is nothing against which science can meaningfully be said to cumulate. Moreover, changes of standard caused by scientific change can trigger cascades of further scientific changes.
This example displays the Bak–Sneppen model. Every column represents one species. As the species grow older, their color changes from light green to dark blue. When a species has the lowest fitness, it is eliminated along with its neighbours, and new species are created in their place with their age reset to zero and their color back to light green.
We can modify the original parameters to analyse their effect on the extinction avalanches: go to the OpenProcessing code and modify the parameters in the "BS" tab.
In [4]:
%%HTML
<iframe src="https://www.openprocessing.org/sketch/434390/embed/" width="500" height="400"></iframe>
In [2]:
import math
import numpy as np
# Bak-Sneppen algorithm
def bak_sneppen(N, maxGenerations, neighbourSize, neighbourProb):
    agesSize = 300
    ages = np.zeros(N)
    agesStart = np.zeros((agesSize, N))
    agesEnd = np.zeros((agesSize, N))
    x = np.zeros(maxGenerations)
    f = np.random.rand(N)
    p1 = 1
    # Iterate over generations
    for t in range(maxGenerations):
        ages = ages + 1
        # Search for the species with the least fitness
        ind = np.argmin(f)
        # Random disposal of the least fit species and its neighbours
        if np.random.rand(1) < p1:
            f[ind] = np.random.rand(1)
            ages[ind] = 0
            for d in range(1, 1+math.floor(neighbourSize/2)):
                if np.random.rand(1) < neighbourProb:
                    f[(ind+d) % N] = np.random.rand(1)
                    ages[(ind+d) % N] = 0
                if np.random.rand(1) < neighbourProb:
                    f[(ind-d) % N] = np.random.rand(1)
                    ages[(ind-d) % N] = 0
        x[t] = np.mean(f)
        # Store the ages of the first generations
        if t < agesSize:
            for iCell in range(N):
                agesStart[t, iCell] = ages[iCell]
        # Store the ages of the last generations (negative indices count from the end)
        if maxGenerations-t <= agesSize:
            for iCell in range(N):
                agesEnd[t-maxGenerations, iCell] = ages[iCell]
    return [x, agesStart, agesEnd]
The example below plots the mean fitness in a chronogram. At first this value grows over time, because the least fit species is eliminated in each generation. As time goes by, an increasingly noisy plateau is expected, because old species keep an almost constant high fitness while new species are introduced with random fitness.
In [20]:
%matplotlib inline
from matplotlib import pyplot as plt
N = 100 # Number of cells in one generation
maxGenerations = 10000 # Calculate up to this maximum number of generations
neighbourSize = 2 # Number (even) of eliminated neighbours (2 = left and right neighbours)
neighbourProb = 1 # Probability of the neighbours to be eliminated (0 <= neighbourProb <= 1)
# Calculate the simulation
[meanFitness, agesStart, agesEnd] = bak_sneppen(N, maxGenerations, neighbourSize, neighbourProb)
agesImage = agesStart
# Figures
plt.figure(figsize=(15,4))
# Ages of the species
ax = plt.subplot(1,3,1)
plt.title('Age')
plt.xlabel('Species')
plt.ylabel('Generations')
ax.set_xticks([])
plt.imshow(agesImage / np.max(agesImage), cmap='hot_r', vmin=0, vmax=1)
# Chronogram
ax = plt.subplot(1,3,2)
plt.xlabel('Generations')
plt.ylabel('Fitness average')
ax.set_xscale('log')
ax.set_yscale('log')
plt.plot(range(len(meanFitness)), meanFitness)
# Histogram
ax = plt.subplot(1,3,3)
plt.xlabel('Fitness average')
plt.ylabel('Number of occurrences')
#plt.hist(meanFitness, bins=np.linspace(1e-4,max(meanFitness),num=10), rwidth=0.9)
plt.hist(meanFitness, bins=np.logspace(-0.6, np.log10(max(meanFitness)),num=10), rwidth=0.9)
ax.set_xscale('log')
ax.set_yscale('log')
plt.show()
A fractal dimension is a ratio providing a statistical index of complexity, comparing how the detail in a pattern (strictly speaking, a fractal pattern) changes with the scale at which it is measured. It has also been characterized as a measure of the space-filling capacity of a pattern that tells how a fractal scales differently from the space it is embedded in; a fractal dimension does not have to be an integer.
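In the box counting version used below, if N(ε) is the number of boxes of side ε needed to cover the pattern, the dimension is the limit of log N(ε) / log(1/ε) as ε goes to zero.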
For instance, if we measure the length of the coast of England, we will find different values according to the unit chosen to make the measurement.
We can apply these fractal ideas to time signals. For instance, the box counting method consists of covering the curve of the signal (its value displayed along time) with boxes and counting how many of them the curve touches at each box size.
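As an illustration only, here is a deliberately naive box-counting sketch for a time signal (box_count_dimension is a hypothetical helper written for this note, not the method used later in this notebook):

import numpy as np

def box_count_dimension(signal, scales=(2, 4, 8, 16, 32)):
    # Naive box-counting estimate of the fractal dimension of a time signal.
    # At each scale the (time, value) plane is split into boxes of side
    # len(signal)/scale, and we count the boxes the curve passes through.
    x = np.asarray(signal, dtype=float).ravel()
    # Normalize values to the same range as the time axis
    y = (x - x.min()) / (np.ptp(x) + 1e-12) * len(x)
    counts = []
    for s in scales:
        box = len(x) / s  # box side length
        cells = {(int(t // box), int(v // box)) for t, v in enumerate(y)}
        counts.append(len(cells))
    # Slope of log N(eps) versus log(1/eps), i.e. versus log(scale)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

print(box_count_dimension(np.cumsum(np.random.randn(4096))))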
There are several algorithms that measure the fractal index of time signals. We are going to use DFA (Detrended Fluctuation Analysis) because of its robustness.
On time signals there are three characteristic reference noises: white noise (DFA exponent α ≈ 0.5), pink or 1/f noise (α ≈ 1) and brown noise (α ≈ 1.5).
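For reference, the standard DFA recipe is: integrate the mean-removed signal, remove a local linear trend in windows of size n, compute the RMS fluctuation F(n), and read the exponent α off the slope of log F(n) versus log n. A minimal sketch follows (dfa_alpha is a toy helper; below we use the dfa function from pyeeg instead, which implements the same idea):

import numpy as np

def dfa_alpha(x, window_sizes=(4, 8, 16, 32, 64, 128)):
    # Minimal Detrended Fluctuation Analysis:
    # returns the scaling exponent alpha of F(n) ~ n**alpha.
    x = np.asarray(x, dtype=float).ravel()
    y = np.cumsum(x - x.mean())  # integrated profile
    F = []
    for n in window_sizes:
        n_windows = len(y) // n
        f2 = 0.0
        for w in range(n_windows):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            f2 += np.mean((seg - trend) ** 2)
        F.append(np.sqrt(f2 / n_windows))
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(F), 1)
    return alpha

print(dfa_alpha(np.random.randn(4096)))  # white noise: alpha ~ 0.5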
Now, we will analyse the results of the sandpile model to check if they are fractal. To do so, we will obtain the fractal index of the avalanche size.
In [34]:
from pyeeg import dfa as dfa
import numpy as np
def plot_dfa(x, cut, step, Nfit):
    # Window sizes: powers of two from half the signal length down to 2**cut
    ix = np.arange(np.log2(len(x)/2), cut, -step)
    n = np.round(2**ix)
    [alpha, n, F] = dfa(x, L=n)
    # Fit a straight line (in log-log space) to the last Nfit points
    index = int(len(n)-Nfit)
    P = np.polyfit(np.log(n[index:]), np.log(F[index:]), 1)
    plt.title("Alpha = {:0.2f}".format(alpha))
    plt.xlabel('n')
    plt.ylabel('F(n)')
    plt.loglog(n, F)
    plt.loglog(n[index:], np.power(n[index:], P[0])*np.exp(P[1]), 'r')
    return [alpha, n, F]
# Width of the board (height is equal to width)
BOARD_SIZE = 100
# Maximum number of stackable sand grains that will cause an avalanche
MAX_GRAINS = 4
# Probability of grains falling down to adjacent piles during an avalanche
FALL_PROBABILITY = 1
# Total number of grains dropped on the board
NUM_GRAINS = 5000
# Figure size
plt.figure(figsize=(16, 4), dpi=72)
# Calculate the simulation
[board, avalancheSize] = sandPileModel(BOARD_SIZE, NUM_GRAINS, MAX_GRAINS, FALL_PROBABILITY)
plt.subplot(1,2,1)
plt.xlabel('Time')
plt.ylabel('Avalanche size')
plt.plot(avalancheSize)
# Calculate the fractal index
plt.subplot(1,2,2)
cut=2
step=0.1
[alpha, n, F] = plot_dfa(avalancheSize, cut, step, len(avalancheSize))
Now, we will analyse the results of the Bak–Sneppen model to check if they are fractal. To do so, we will obtain the fractal index of the mean fitness signal.
In [32]:
from pyeeg import dfa as dfa
import numpy as np
import math
N = 64 # Number of cells in one generation
maxGenerations = 10000 # Calculate up to this maximum number of generations
neighbourSize = 2 # Number (even) of eliminated neighbours (2 = left and right neighbours)
neighbourProb = 0 # Probability of the neighbours to be eliminated (0 <= neighbourProb <= 1)
# Figure size
plt.figure(figsize=(16, 4), dpi=72)
# Calculate the simulation
[meanFitness, agesStart, agesEnd] = bak_sneppen(N, maxGenerations, neighbourSize, neighbourProb)
plt.subplot(1,2,1)
plt.xlabel('Generation')
plt.ylabel('Fitness average')
plt.plot(meanFitness)
# Calculate the fractal index
plt.subplot(1,2,2)
[alpha, n, F] = plot_dfa(meanFitness[int(maxGenerations/2):], 2, 0.2, maxGenerations/5)