In [ ]:
# enable MPI parallelism across all available processors
prop={'dfunc':dist_metric, 'outfile':"gaussian_example.txt", 'verbose':1, 'adapt_t': True, 'mpi': True}
You can then use the sample script in the examples folder to run the Gaussian example on e.g. 16 processors using
$> mpirun -np 16 python gaussian.py gaussian_params.ini
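For reference, the core of such a driver script looks roughly like the sketch below. The simulation model, distance metric, prior choice and tolerance levels here are illustrative placeholders rather than the exact contents of gaussian.py; only the overall pattern (build the prop dictionary, construct the sampler, call sample) is what this is meant to show.
In [ ]:
import numpy as np
import astroabc

# Placeholder forward model: simulate a data summary given a proposed mean (illustrative only)
def simulation(param):
    return np.mean(np.random.normal(param, 1.0, 1000))

# Simple absolute-difference distance between the data summary and the simulated one
def dist_metric(d, x):
    return np.abs(d - x)

data = simulation(0.5)                   # "observed" data summary
priors = [('normal', [0.0, 1.0])]        # one parameter with a Gaussian prior

prop = {'dfunc': dist_metric, 'outfile': "gaussian_example.txt",
        'verbose': 1, 'adapt_t': True, 'mpi': True}

# 1 parameter, 100 particles, tolerance shrinking from 0.7 to 0.05 over 20 iterations
sampler = astroabc.ABC_class(1, 100, data, [0.7, 0.05], 20, priors, **prop)
sampler.sample(simulation)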
If the forward model simulation can itself be run in parallel, it is possible to split the MPI communicator in astroABC so that some nodes run the sampler while the remaining nodes are reserved for running each particle's simulation in parallel.
Again this is launched using e.g.
$> mpirun -np 64
and in the astroABC settings you should specify both 'mpi': True and 'mpi_splitcomm': True.
In [ ]:
prop={'dfunc':dist_metric, 'outfile':"gaussian_example.txt", 'verbose':1, 'adapt_t': True, 'pert_kernel':2,
      'mpi':True, 'mpi_splitcomm': True, 'num_abc': 4}
An additional flag which must be set is 'num_abc', which specifies how many processors are allocated to the ABC sampler. The remaining processors are divided evenly among these sampler processors for running the simulations. Note that because processor 0 controls many of the communications, it is not involved in the sampling itself.
In the above example, specifying mpirun -np 64 and 'num_abc': 4 will run the sampler on a pool of 3 processors, each of which has 20 separate processors available to it for launching the simulation.
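To make the bookkeeping explicit, the split implied by the description above works out as follows. This is a back-of-the-envelope illustration inferred from the worked example, not code from the library.
In [ ]:
# Processor accounting for mpirun -np 64 with 'num_abc': 4 (inferred from the example above)
n_total = 64                                   # total processors requested via mpirun
num_abc = 4                                    # processors given to the ABC sampler, including rank 0
sampler_pool = num_abc - 1                     # rank 0 only handles communication
sims_per_sampler = (n_total - num_abc) // sampler_pool
print(sampler_pool, sims_per_sampler)          # -> 3 20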
There is a working example of how to implement this in the examples directory.
For smaller jobs the Python multiprocessing option is available; this spawns multiple processes, but they remain bound to a single node. To use the mp option in astroABC, simply set 'mp': True in the keywords for the class. The number of processes can be set with 'num_proc': <number of threads>; if it is not specified then all available threads are used.
In [ ]:
#to run on 4 threads
prop={'dfunc':dist_metric, 'outfile':"gaussian_example.txt", 'verbose':1, 'adapt_t': True, 'mp': True, 'num_proc':4}
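If you prefer 'num_proc' to track the hardware rather than a hard-coded value, one option is to query the core count yourself. The snippet below is a small variant of the cell above and simply mirrors the default behaviour when 'num_proc' is omitted.
In [ ]:
import multiprocessing

# use every core on the node (equivalent to leaving 'num_proc' unset)
prop={'dfunc':dist_metric, 'outfile':"gaussian_example.txt", 'verbose':1, 'adapt_t': True,
      'mp': True, 'num_proc': multiprocessing.cpu_count()}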