In [1]:
%reset
import numpy as np
import pyevodyn.numerical as num
import matplotlib.pyplot as plt
This notebook analyzes the game in "Evolutionary cycles of cooperation and defection" by Imhof, Fudenberg and Nowak (PNAS, 2005).
We first create a function to get the game as a numpy matrix. The strategies are ALLC, ALLD and TFT. Here r, s, t, p are the parameters of the one-shot Prisoner's Dilemma (reward, sucker's payoff, temptation and punishment); c is the complexity cost of the conditional strategy TFT, and m is the expected number of rounds. Default values are those reported in Figure 1 of the paper.
In [2]:
def get_me_the_game(r=3.0, s=0.1, t=5.0, p=1.0, m=10.0, c=0.8):
    ans = np.empty(shape=(3, 3))
    # row 0: ALLC against ALLC, ALLD, TFT
    ans[0, 0] = r*m
    ans[0, 1] = s*m
    ans[0, 2] = r*m
    # row 1: ALLD exploits ALLC every round, but TFT only in the first
    ans[1, 0] = t*m
    ans[1, 1] = p*m
    ans[1, 2] = t + p*(m-1.0)
    # row 2: TFT cooperates with cooperators and pays the complexity cost c
    ans[2, 0] = r*m - c
    ans[2, 1] = s + p*(m-1.0) - c
    ans[2, 2] = r*m - c
    return ans
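As a quick sanity check, the default parameters should reproduce the payoff matrix of Figure 1 in the paper. The function is repeated here so the snippet is self-contained:

```python
import numpy as np

def get_me_the_game(r=3.0, s=0.1, t=5.0, p=1.0, m=10.0, c=0.8):
    # same construction as above; rows/columns ordered ALLC, ALLD, TFT
    ans = np.empty(shape=(3, 3))
    ans[0, 0] = r*m
    ans[0, 1] = s*m
    ans[0, 2] = r*m
    ans[1, 0] = t*m
    ans[1, 1] = p*m
    ans[1, 2] = t + p*(m-1.0)
    ans[2, 0] = r*m - c
    ans[2, 1] = s + p*(m-1.0) - c
    ans[2, 2] = r*m - c
    return ans

# With the defaults: [[30., 1., 30.], [50., 10., 14.], [29.2, 8.3, 29.2]]
print(get_me_the_game())
```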
We can plot deterministic dynamics using the function replicator_trajectory, in the numerical module. Here we do it for a couple of starting points.
In [21]:
# first, we get the game
game_matrix = get_me_the_game()
# we define a starting point
x_0 = np.array([0.5, 0.2, 0.3])
# we use the function replicator_trajectory from the numerical module
orbits = num.replicator_trajectory(game_matrix, x_0, maximum_iterations=80000)
# Plotting stuff
plt.rc('lines', linewidth=2.0)
plt.figure(figsize=(20,4))
plt.plot(orbits[0], 'b-', label='ALLC')
plt.plot(orbits[1], 'r-', label='ALLD')
plt.plot(orbits[2], 'g-', label='TFT')
plt.legend()
plt.title('(ALLC,ALLD,TFT) starting at $x_0$ = ' + str(x_0))
Out[21]:
In [26]:
# first, we get the game
game_matrix = get_me_the_game()
# we define a starting point
x_0 = np.array([0.4, 0.4, 0.2])
# we use the function replicator_trajectory from the numerical module
orbits = num.replicator_trajectory(game_matrix, x_0, maximum_iterations=95000)
# Plotting stuff
plt.rc('lines', linewidth=2.0)
plt.figure(figsize=(20,4))
plt.plot(orbits[0], 'b-', label='ALLC')
plt.plot(orbits[1], 'r-', label='ALLD')
plt.plot(orbits[2], 'g-', label='TFT')
plt.legend()
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.title('(ALLC,ALLD,TFT) starting at $x_0$ = ' + str(x_0))
Out[26]:
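The trajectories above rely on pyevodyn. For readers without it, the replicator dynamics can be sketched with a short forward-Euler integration of $\dot{x}_i = x_i\left((Ax)_i - x^T A x\right)$. This is only a sketch: the step size, stopping rule and output format of `replicator_trajectory` are assumptions, not pyevodyn's actual internals.

```python
import numpy as np

def replicator_trajectory_sketch(game_matrix, x_0, steps=2000, dt=0.001):
    """Forward-Euler integration of the replicator equation.
    Returns one list of frequencies per strategy (this output format
    mimics how orbits are indexed in the cells above; it is an assumption)."""
    x = np.array(x_0, dtype=float)
    orbit = [x.copy()]
    for _ in range(steps):
        fitness = game_matrix.dot(x)        # payoff of each strategy
        avg = x.dot(fitness)                # mean population payoff
        x = x + dt * x * (fitness - avg)    # replicator update
        x = np.clip(x, 0.0, 1.0)
        x /= x.sum()                        # stay on the simplex
        orbit.append(x.copy())
    return [[point[i] for point in orbit] for i in range(len(x))]

game = np.array([[30.0, 1.0, 30.0],
                 [50.0, 10.0, 14.0],
                 [29.2, 8.3, 29.2]])
orbits = replicator_trajectory_sketch(game, [0.5, 0.2, 0.3])
```

The clip-and-renormalise step is a cheap numerical guard; a proper integrator (e.g. an adaptive ODE solver) would be preferable for long trajectories.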
We will inspect the stationary distribution of a Moran process in the small-mutation limit, where the population is monomorphic almost all the time. Payoffs are mapped to fitness via the mapping argument, either linearly ('LIN') or exponentially ('EXP').
Let us first look at strategy abundance as a function of the intensity of selection. We use the function monomorphous_transition_matrix.
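In the small-mutation limit the process reduces to a 3×3 Markov chain over the monomorphic states, with off-diagonal transitions proportional to pairwise fixation probabilities. The following self-contained sketch shows one standard way to build this chain with the exponential mapping $f = e^{\beta \pi}$; the normalisation conventions are mine, and pyevodyn's monomorphous_transition_matrix and stationary_distribution may differ in details.

```python
import numpy as np

def fixation_probability(a, b, c, d, beta, N):
    """Probability that a single A mutant fixates in a resident-B Moran
    population of size N, with exponential mapping f = exp(beta * payoff).
    Payoffs: A vs A = a, A vs B = b, B vs A = c, B vs B = d."""
    gammas = []
    for j in range(1, N):  # j = number of A individuals
        pi_a = (a * (j - 1) + b * (N - j)) / (N - 1.0)
        pi_b = (c * j + d * (N - j - 1)) / (N - 1.0)
        gammas.append(np.exp(-beta * (pi_a - pi_b)))  # T^-/T^+ ratio
    return 1.0 / (1.0 + np.sum(np.cumprod(gammas)))

def small_mutation_chain(game, beta, u, N):
    """Markov chain over monomorphic states: each off-diagonal entry is
    (mutation probability / number of other strategies) * fixation probability."""
    n = game.shape[0]
    M = np.zeros((n, n))
    for i in range(n):            # resident strategy
        for j in range(n):        # invading strategy
            if i != j:
                rho = fixation_probability(game[j, j], game[j, i],
                                           game[i, j], game[i, i], beta, N)
                M[i, j] = u / (n - 1) * rho
        M[i, i] = 1.0 - M[i].sum()
    return M

def stationary_distribution_sketch(M):
    """Left eigenvector of M for eigenvalue 1, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()
```

With beta = 0 (neutral drift) the fixation probability reduces to 1/N, which is a convenient correctness check.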
In [13]:
# we set the fixed parameters: mutation probability, population size and the game
u = 0.001
N = 30
game_matrix = get_me_the_game()
# build the result for a range of intensities of selection
intensity_vector = np.arange(start=0.0, stop=1.0, step=0.01)
allc_list = []
alld_list = []
tft_list = []
for intensity_of_selection in intensity_vector:
    markov_chain = num.monomorphous_transition_matrix(intensity_of_selection, mutation_probability=u, population_size=N, game_matrix=game_matrix, mapping='LIN')
    (allc, alld, tft) = num.stationary_distribution(markov_chain)
    allc_list.append(allc)
    alld_list.append(alld)
    tft_list.append(tft)
In [17]:
# Plotting stuff
plt.rc('lines', linewidth=2.0)
plt.figure(figsize=(6,5))
plt.plot(intensity_vector, allc_list, 'b-', label='ALLC')
plt.plot(intensity_vector, alld_list, 'r-', label='ALLD')
plt.plot(intensity_vector, tft_list, 'g-', label='TFT')
plt.legend(loc='center right')
plt.xlabel('Intensity of selection')
plt.ylabel('Abundance')
plt.xlim((0.0, 0.2))
Out[17]:
We see that cooperation only flourishes under weak selection! Now let's see how things depend on the expected number of rounds.
In [43]:
# we set the fixed parameters: mutation probability, population size and intensity of selection
u = 0.001
N = 30
intensity_of_selection = 0.05
# build the result for a range of m
m_vector = np.arange(start=0.1, stop=20.0, step=0.1)
allc_list = []
alld_list = []
tft_list = []
for m in m_vector:
    game_matrix = get_me_the_game(m=m)
    markov_chain = num.monomorphous_transition_matrix(intensity_of_selection, mutation_probability=u, population_size=N, game_matrix=game_matrix, mapping='EXP')
    (allc, alld, tft) = num.stationary_distribution(markov_chain)
    allc_list.append(allc)
    alld_list.append(alld)
    tft_list.append(tft)
# Plotting stuff
plt.rc('lines', linewidth=2.0)
plt.figure(figsize=(10,4))
plt.plot(m_vector, allc_list, 'b-', label='ALLC')
plt.plot(m_vector, alld_list, 'r-', label='ALLD')
plt.plot(m_vector, tft_list, 'g-', label='TFT')
plt.legend(loc='best')
plt.title('Frequency in stationarity')
plt.xlabel('Expected number of rounds')
plt.ylabel('Abundance')
Out[43]:
Cooperation does better with many rounds! Duh.