The Kuramoto model is defined by the equation $$ \dot \theta_i = \omega_i + K\sum \limits_{j=1}^N \sin(\theta_j -\theta_i). $$ The vectorized implementation below offers up to 25x faster performance in the best case, as it is expressed entirely in terms of linear algebra operations.
All computations are now carried out on NumPy arrays instead of explicit Python loops, leading to a significant improvement in execution time.
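Restricting the coupling to the edges of a network with adjacency matrix $A$, which is the form the vectorized code below actually evaluates, gives $$ \dot \theta_i = \omega_i + K\sum \limits_{j=1}^N A_{ij}\sin(\theta_j -\theta_i). $$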
In [54]:
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from numpy import *
import networkx as nx
import random as rd
import cmath as cm
%matplotlib inline
In [55]:
k = 5.0  # global coupling strength K
Time
In [56]:
t = linspace(0.0, 2.0, 1000)  # time points for the integration
Initial State
In [57]:
state_init = 5.0*random.rand(100)  # random initial phases for 100 oscillators
Frequency Distribution
In [58]:
gw = random.normal(2.0, 1.0, 100)  # natural frequencies drawn from a normal distribution (mean 2.0, std 1.0)
Network
In [59]:
g = nx.watts_strogatz_graph(100, 4, 0.4)  # small-world network: 100 nodes, 4 nearest neighbors, rewiring probability 0.4
adjm = nx.adjacency_matrix(g).todense()   # dense adjacency matrix used by the vectorized implementation
In [60]:
def coupling(state_init):
    # Matrix of pairwise phase differences: entry (i, j) holds theta_j - theta_i
    state_init = state_init.reshape([1, -1])
    return state_init - state_init.T

def kuramoto2(state_init, t, gw, k, adjm):
    # Vectorized right-hand side of the Kuramoto model on a network
    theta_d2 = gw
    coupling_matrix = coupling(state_init)
    coupling_matrix = multiply(adjm, coupling_matrix)  # keep only differences across edges
    coupling_matrix = sin(coupling_matrix)
    coupling_terms = sum(coupling_matrix, axis=1).T
    theta_d2 = theta_d2 + k*coupling_terms
    return array(theta_d2)[0]

def evolution(state_init, t, gw, k, adjm):
    # Integrate the phases over the time grid t
    params = (gw, k, adjm)
    state_evolution = odeint(kuramoto2, state_init, t, args=params)
    return state_evolution
In [6]:
def kuraimp(state, t, gw, k, graph):
    # Loop-based implementation, kept for timing comparison against kuramoto2
    theta_init = state
    theta_d2 = gw
    nodes = graph.nodes()
    coupling_matrix = zeros([len(nodes), len(nodes)])
    for i in nodes:
        neigh = graph.neighbors(i)
        for j in neigh:
            coupling_matrix[i, j] = theta_init[j] - theta_init[i]
    # print(coupling_matrix)
    coupling_matrix = sin(coupling_matrix)
    coupling_terms = sum(coupling_matrix, axis=1)
    theta_d2 = theta_d2 + k*coupling_terms
    return theta_d2
In [7]:
%timeit kuramoto2(state_init,t,gw,k, adjm)
In [8]:
%timeit kuraimp(state_init,t,gw,k,g)
In [9]:
%timeit coupling(state_init)
In [10]:
def plot_phase(state_init, t, gw, k, adjm):
    # Integrate and plot the phase of every oscillator against time
    states = evolution(state_init, t, gw, k, adjm)
    plt.plot(t, states)
    plt.show()
In [11]:
%timeit evolution(state_init,t,gw,k,adjm)
In [61]:
plot_phase(state_init, t, gw, k, adjm)
In [13]:
def order_parameter(state_init, t, gw, k, adjm):
    # Kuramoto order parameter r evaluated at the final integration time
    states_final = evolution(state_init, t, gw, k, adjm)
    iota = 1.0J
    ordparam = cm.e**(iota*states_final[-1])
    orderparameter = sum(ordparam)/100.0
    return abs(orderparameter)
In [37]:
order_parameter(state_init,t,gw,k,adjm)
Out[37]:
In [62]:
#%%timeit
K = linspace(0.0, 2.5, 100)
order = []
for i in K:
    order.append(order_parameter(state_init, t, gw, i, adjm))
plt.plot(K, order)
plt.show()
In [104]:
g = nx.complete_graph(100)
adjm = nx.adjacency_matrix(g).todense()
In [105]:
plot_phase(state_init, t, gw, k, adjm)
In [106]:
K = linspace(0.0, 2.5, 100)
order = []
for i in K:
    order.append(order_parameter(state_init, t, gw, i, adjm))
plt.plot(K, order)
plt.show()
Apart from the minuscule transient phase at the beginning, the system almost instantly settles into a synchronized state, as is expected of a complete graph, and this is reflected in the evolution of the order parameter $r$.
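For reference, the quantity computed by order_parameter above is the standard Kuramoto order parameter evaluated at the final integration time, $$ r\, e^{i\psi} = \frac{1}{N}\sum \limits_{j=1}^N e^{i\theta_j}, $$ with $N = 100$ here; $r \approx 1$ indicates a fully phase-locked population and $r \approx 0$ an incoherent one.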
In [110]:
g = nx.erdos_renyi_graph(100, 0.01)
adjm = nx.adjacency_matrix(g).todense()
In [113]:
nx.draw(g)
plt.show()
In [102]:
plot_phase(state_init, t, gw, k, adjm)
In [103]:
exec(In[66])  # re-run the code from the earlier cell In[66] for this network
First create a ring over n nodes. Then each node in the ring is connected to its k nearest neighbors (k-1 neighbors if k is odd). Shortcuts are then created by adding new edges as follows: for each edge u-v in the underlying “n-ring with k nearest neighbors”, with probability p add a new edge u-w to a randomly chosen existing node w. In contrast with watts_strogatz_graph(), no edges are removed.
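A minimal sketch of the shortcut step described above (illustrative only; add_nws_shortcuts is a hypothetical helper, not the NetworkX implementation):

import random as rd

def add_nws_shortcuts(g, p):
    # For every existing ring edge u-v, with probability p add one extra edge
    # u-w to a randomly chosen existing node w; no edges are ever removed.
    nodes = list(g.nodes())
    for u, v in list(g.edges()):
        if rd.random() < p:
            w = rd.choice(nodes)
            if w != u and not g.has_edge(u, w):
                g.add_edge(u, w)
    return g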
In [70]:
g = nx.newman_watts_strogatz_graph(100,4, 0.5)
adjm = nx.adjacency_matrix(g).todense()
In [114]:
nx.draw(g)
plt.show()
In [71]:
plot_phase(state_init, t, gw, k, adjm)
In [72]:
exec(In[66])  # re-run the code from the earlier cell In[66] for this network
In [116]:
g = nx.barabasi_albert_graph(100,4)
adjm = nx.adjacency_matrix(g).todense()
In [119]:
nx.draw(g)
plt.show()
In [74]:
plot_phase(state_init, t, gw, k, adjm)
In [75]:
exec(In[66])  # re-run the code from the earlier cell In[66] for this network
Holme and Kim algorithm for growing graphs with a power-law degree distribution and approximate average clustering.
Parameters:
n : int
    the number of nodes
m : int
    the number of random edges to add for each new node
p : float
    probability of adding a triangle after adding a random edge
seed : int, optional
    seed for the random number generator (default=None)
Notes
The average clustering has a hard time getting above a certain cutoff that depends on m. This cutoff is often quite low. Note that the transitivity (fraction of triangles to possible triangles) seems to go down with network size.
It is essentially the Barabási-Albert (B-A) growth model with an extra step: each random edge is followed by a chance of making an edge to one of its neighbors too (and thus forming a triangle).
This algorithm improves on B-A in the sense that it enables a higher average clustering to be attained if desired.
It is possible to obtain a disconnected graph with this algorithm, since the initial m nodes may not all be linked to a new node on the first iteration, unlike the B-A model.
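A rough sketch of that extra triangle step (illustrative only; triangle_step is a hypothetical helper, not the NetworkX code):

import random as rd

def triangle_step(g, new_node, target, p):
    # With probability p, connect new_node to a random neighbor of the node it
    # just attached to, closing a triangle; otherwise the growth model falls
    # back to another preferential-attachment edge.
    candidates = [w for w in g.neighbors(target)
                  if w != new_node and not g.has_edge(new_node, w)]
    if candidates and rd.random() < p:
        g.add_edge(new_node, rd.choice(candidates))
        return True
    return False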
In [120]:
g = nx.powerlaw_cluster_graph(100,4,0.05)
adjm = nx.adjacency_matrix(g).todense()
In [121]:
nx.draw(g)
plt.show()
In [99]:
plot_phase(state_init, t, gw, k, adjm)
In [100]:
exec(In[66])  # re-run the code from the earlier cell In[66] for this network