Here we define the number of islands to use as well as the smoothing level for the curves.
In [2]:
N = 4                                              # number of islands
A = 1000                                           # smoothing step used when plotting
OPS = ['bit-flip', '1-flip', '3-flip', '5-flip']   # one variation operator per island
Then a few routines…
In [3]:
import pandas as pd
In [4]:
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('figure', figsize=(10, 8))
In [5]:
%%script /Dev/dim/release/application/OneMax/onemax -h
We now run the following test: the DIM algorithm applied to the OneMax problem with these parameters:
In [7]:
%%script timeout 10 /Dev/dim/release/application/OneMax/onemax --monitorPrefix=/tmp/onemax_test_result --targetFitness=1000 --nislands=4
In [8]:
!tail /tmp/onemax_test_result_monitor_*
In [10]:
data_set = [pd.read_csv('/tmp/onemax_test_result_monitor_%d' % i, index_col=['DateTime', 'migration'], parse_dates=['DateTime']) for i in range(N)]
mig_set = [data_set[i].reset_index(0) for i in range(N)]
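A quick sanity check (a sketch; it assumes the monitor files expose the column names used below, namely P&lt;j&gt;to&lt;i&gt;, nb_individual_isl&lt;i&gt; and best_value_isl&lt;i&gt;):
In [ ]:
# Hypothetical inspection cell: list the columns of island 0 and preview its data
# to confirm the column names referenced in the following cells.
print(data_set[0].columns.tolist())
mig_set[0].head()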
Below we illustrate the attractiveness, without and with the cumulative sum of the values.
In [11]:
attractiveness = pd.concat([pd.concat([mig_set[j]['P%dto%d' % (j, i)] for j in range(N)], axis=1).sum(axis=1) for i in range(N)], axis=1)
attractiveness.rename(columns=dict(zip(attractiveness.columns, OPS)), inplace=True)
fig, axes = plt.subplots(nrows=2, ncols=2)
for affinity, ax in [(1, axes[0, 0]), (10, axes[0, 1]), (100, axes[1, 0]), (1000, axes[1, 1])]:
    attractiveness[::affinity].plot(ax=ax, title="smoothing = %d" % affinity).set_ylabel('nb individuals')
attractiveness.cumsum().plot(title='cumulative sum').set_ylabel('cumulative nb individuals')
Out[11]:
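For readability, the nested comprehension above is equivalent to this loop-based sketch (same mig_set, N and OPS as defined earlier): the attractiveness of island i is the total number of individuals sent to it by every island j.
In [ ]:
# Loop-based equivalent of the attractiveness computation above (for readability only):
# for each destination island i, sum the migrations P<j>to<i> over all source islands j.
cols = {}
for i in range(N):
    incoming = pd.concat([mig_set[j]['P%dto%d' % (j, i)] for j in range(N)], axis=1)
    cols[OPS[i]] = incoming.sum(axis=1)
attractiveness_check = pd.DataFrame(cols)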
In [12]:
nbindis = pd.concat([mig_set[i]['nb_individual_isl%d' % i] for i in range(N)], axis=1)
nbindis.rename(columns=dict(zip(nbindis.columns, OPS)), inplace=True)
nbindis_max = nbindis.max(axis=1); nbindis_max.name = 'nbindis max'
nbindis_avg = nbindis.mean(axis=1); nbindis_avg.name = 'nbindis avg'
#nbindis = nbindis.join(nbindis_max).join(nbindis_avg)
fig, axes = plt.subplots(nrows=2, ncols=2)
for affinity, ax in [(1, axes[0, 0]), (10, axes[0, 1]), (100, axes[1, 0]), (1000, axes[1, 1])]:
    nbindis[::affinity].plot(ax=ax, title="smoothing = %d" % affinity).set_ylabel('nb individuals')
[ax.set_ylabel('nb individuals') for ax in nbindis[::A].plot(subplots=True, title="smoothing = %d" % A)]
nbindis.cumsum().plot(title='cumulative sum').set_ylabel('cumulative nb individuals')
Out[12]:
In [13]:
fig, ax = plt.subplots()
ax.set_ylabel('nb individuals')
nbindis.boxplot(ax=ax)
Out[13]:
In [14]:
nbindis.describe()
Out[14]:
In [15]:
bests = pd.concat([mig_set[i]['best_value_isl%d' % i] for i in range(N)], axis=1)
bests.rename(columns=dict(zip(bests.columns,OPS)), inplace=True)
bests_max = bests.max(axis=1); bests_max.name = 'bests max'
bests_avg = bests.mean(axis=1); bests_avg.name = 'bests avg'
#bests = bests.join(bests_max).join(bests_avg)
bests[::A].plot().set_ylabel('fitness')
Out[15]:
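As a quick check (a sketch; it assumes the run above actually reached the --targetFitness=1000 requested on the command line), the first migration step at which any island hits the target can be located as follows:
In [ ]:
# Hypothetical check: first migration step at which the best fitness reaches the
# target of 1000 requested on the command line (prints a message otherwise).
reached = bests_max[bests_max >= 1000]
print(reached.index[0] if len(reached) else 'target fitness not reached')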
In [16]:
nbindis_bests_max = nbindis.join(bests_max).set_index('bests max')
fig, axes = plt.subplots(nrows=2, ncols=2)
for affinity, ax in [(1, axes[0, 0]), (10, axes[0, 1]), (100, axes[1, 0]), (1000, axes[1, 1])]:
    ax = nbindis_bests_max[::affinity].plot(ax=ax, title='smoothing = %d' % affinity)
    ax.set_ylabel('nb individuals')
    ax.set_xlabel('fitness')
In [ ]: