Using pyevodyn to study a game with a custom payoff function*

* Work in progress

In [10]:
import pyevodyn.utils as utils
import pyevodyn.numerical as num
# numpy and matplotlib are used below for the parameter sweep and the plots
import numpy as np
import matplotlib.pyplot as plt

This notebook analyzes the game in "Via freedom to coercion: The emergence of costly punishment" by Hauert, Traulsen, Brandt, Nowak and Sigmund (Science, 2007).

We first define the payoff structure. To do so, we create two auxiliary functions that give the benefit and the effective cost of cooperation as a function of the population composition and the other parameters of the game.


In [17]:
def benefit_function(population_composition, r, c, sigma, beta, gamma, group_size, population_size):
    # unpack how many individuals play each strategy (C, D, L, P)
    [xc, xd, xl, xp] = population_composition
    # if (almost) everyone is a loner, no public good is produced and only the loner payoff remains
    if xl >= population_size - 1:
        return sigma
    part_1_a = r*c*(xc + xp)/(population_size - xl - 1.0)
    part_1_b = 1.0 - (population_size/(group_size*(population_size - xl)))
    part_1 = part_1_a*part_1_b
    part_2_a = r*c*(xc + xp)*(xl - group_size + 1.0)
    part_2_b = group_size*(population_size - xl - 1.0)*(population_size - xl)
    binomial = utils.binomial_coefficient(xl, group_size - 1)/utils.binomial_coefficient(population_size - 1, group_size - 1)
    part_2 = binomial*(sigma + (part_2_a/part_2_b))
    return part_1 + part_2

def contribution_cost(population_composition, r, c, sigma, beta, gamma, group_size, population_size):
    # unpack how many individuals play each strategy (C, D, L, P)
    [xc, xd, xl, xp] = population_composition
    # if (almost) everyone is a loner, nobody contributes and the effective cost is zero
    if xl >= population_size - 1:
        return 0.0
    part_1 = 1.0 - (r/group_size)*((population_size - group_size)/(population_size - xl - 1.0))
    binomial = utils.binomial_coefficient(xl, group_size - 1)/utils.binomial_coefficient(population_size - 1, group_size - 1)
    part_2_a = (r/group_size)*((xl + 1.0)/(population_size - xl - 1.0))
    part_2_b = (r*(population_size - xl - 2.0)/(population_size - xl - 1.0)) - 1.0
    return part_1 + binomial*(part_2_a + part_2_b)
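
Before moving on, we can sanity-check both helpers on an arbitrary population composition (the numbers below are purely illustrative and not taken from the paper):


In [ ]:
# illustrative composition: 10 cooperators, 10 defectors, 5 loners and 5 punishers
composition = [10, 10, 5, 5]
N_example = sum(composition)
print(benefit_function(composition, r=3.0, c=1.0, sigma=1.0, beta=1.0, gamma=0.3,
                       group_size=5.0, population_size=N_example))
print(contribution_cost(composition, r=3.0, c=1.0, sigma=1.0, beta=1.0, gamma=0.3,
                        group_size=5.0, population_size=N_example))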

Now we are ready to define the payoff function. Our strategies are C, D, L and P (indexed 0, 1, 2 and 3). The function needs to respect the signature payoff_function(index, population_composition, kwargs), where population_composition is a list containing how many copies of each strategy are present (so sum(population_composition) equals the population size), and any additional game parameters are passed as keyword arguments. Default parameter values correspond to those reported in Figure 1 of Hauert et al.


In [32]:
def payoff_function_CDLP(index, population_composition, r=3.0, c=1.0, sigma=1.0, beta=1.0, gamma=0.3, group_size=5.0):
    [xc, xd, xl, xp] = population_composition
    population_size = sum(population_composition)
    benefit = benefit_function(population_composition, r, c, sigma, beta, gamma, group_size, population_size)
    if index == 0:
        # payoff for cooperators: benefit minus the cost of contributing
        return benefit - c*contribution_cost(population_composition, r, c, sigma, beta, gamma, group_size, population_size)
    if index == 1:
        # payoff for defectors: benefit minus the expected fine imposed by punishers
        return benefit - beta*(group_size - 1.0)*(xp/(population_size - 1.0))
    if index == 2:
        # loners always get the fixed payoff sigma
        return sigma
    if index == 3:
        # altruistic punishers: benefit minus contribution cost minus the cost of punishing defectors
        return benefit - c*contribution_cost(population_composition, r, c, sigma, beta, gamma, group_size, population_size) - gamma*(group_size - 1.0)*(xd/(population_size - 1.0))
    raise ValueError('You should never be here!')
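
As a quick check, we can evaluate the payoff of each strategy in a mixed population using the default parameters (again, the composition is just an illustrative example, not a value from the paper):


In [ ]:
composition = [10, 10, 5, 5]  # illustrative mix of C, D, L and P
[payoff_function_CDLP(index, composition) for index in range(4)]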

We are ready to study evolutionary dynamics.

Stochastic dynamics

We will inspect the stationary distribution of a Moran process in the small-mutation limit, where the population is monomorphic almost all of the time. Payoffs are mapped to fitness using an exponential function.
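
For intuition, an exponential payoff-to-fitness mapping simply exponentiates the payoff scaled by the intensity of selection; the sketch below is our reading of what mapping='EXP' does, and pyevodyn performs this step internally:


In [ ]:
# sketch of an exponential payoff-to-fitness mapping (assumed form, for intuition only):
#   fitness = exp(intensity_of_selection * payoff)
example_payoffs = np.array([1.5, 0.8, 1.0, 1.2])  # made-up payoff values
np.exp(0.1*example_payoffs)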

Let's see how the abundance of each strategy changes as a function of the intensity of selection.


In [53]:
# fixed parameters: mutation probability, population size and the game parameters
u = 0.001
N = 30
r = 3.0
c = 1.0
sigma = 1.0
beta = 1.0
gamma = 0.3
group_size = 5.0
# build the result for a range of intensities of selection
intensity_vector = np.logspace(-4.5, 0.7, num=45)  # we will use a logarithmic scale on the x axis
c_list = []
d_list = []
l_list = []
p_list = []
for intensity_of_selection in intensity_vector:
    markov_chain = num.monomorphous_transition_matrix(intensity_of_selection, mutation_probability=u,
                                                      population_size=N, payoff_function=payoff_function_CDLP,
                                                      number_of_strategies=4, mapping='EXP',
                                                      r=r, c=c, sigma=sigma, beta=beta, gamma=gamma, group_size=group_size)
    (cooperators, defectors, loners, punishers) = num.stationary_distribution(markov_chain)
    c_list.append(cooperators)
    d_list.append(defectors)
    l_list.append(loners)
    p_list.append(punishers)

# plotting
plt.rc('lines', linewidth=2.0)
plt.figure(figsize=(10, 4))
plt.plot(intensity_vector, c_list, 'b-', label='Cooperators')
plt.plot(intensity_vector, d_list, 'r-', label='Defectors')
plt.plot(intensity_vector, l_list, 'y-', label='Loners')
plt.plot(intensity_vector, p_list, 'g-', label='Punishers')
plt.xscale("log")  # logarithmic x axis
plt.axis([10**-4.5, 10**0.7, 0, 1.0])
plt.legend(loc='best')
plt.title('Frequency in stationarity')
plt.xlabel('Intensity of selection')
plt.ylabel('Abundance')


Out[53]:
[Figure: abundance of Cooperators, Defectors, Loners and Punishers in the stationary distribution, plotted against the intensity of selection on a logarithmic axis]

Altruistic punishment wins when selection is sufficiently strong!