fMKS is currently being developed with Dask support: the generate_cahn_hilliard_data function already generates its data using Dask. This is an embarrassingly parallel workload, since MKS typically requires many Cahn-Hilliard simulations to calibrate the model. The following is tested with both the threaded and multiprocessing schedulers; the author has not yet been able to get the distributed scheduler working (an untested sketch of that approach appears at the end of this section).
In [15]:
import numpy as np
import dask.array as da
from fmks.data.cahn_hilliard import generate_cahn_hilliard_data
import dask.threaded
import dask.multiprocessing
The function time_ch calls generate_cahn_hilliard_data to generate the data. generate_cahn_hilliard_data returns the microstructure and response fields as a tuple, and compute is then called on the response field with a given number of workers and a chosen scheduler.
In [10]:
def time_ch(num_workers,
            get,
            shape=(48, 200, 200),
            chunks=(1, 200, 200),
            n_steps=100):
    # Generate the data and force evaluation of the response field
    # (element [1] of the returned tuple) with the requested scheduler.
    generate_cahn_hilliard_data(shape,
                                chunks=chunks,
                                n_steps=n_steps)[1].compute(num_workers=num_workers,
                                                            get=get)
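For reference, here is a minimal sketch of calling generate_cahn_hilliard_data directly, outside the timing wrapper. It assumes the same signature used above and that both returned fields are lazy Dask arrays; the variable names and the inspection lines are illustrative only.

X, y = generate_cahn_hilliard_data((48, 200, 200),
                                   chunks=(1, 200, 200),
                                   n_steps=100)
# X is the microstructure, y is the response; nothing is computed yet
print(y.shape, y.chunks)  # with chunks=(1, 200, 200) each sample sits in its own chunk,
                          # which is what makes the workload embarrassingly parallel
y_np = y.compute(num_workers=4, get=dask.threaded.get)  # materialize as a NumPy array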
In [13]:
for n_proc in (8, 4, 2, 1):
    print(n_proc, "thread(s)")
    %timeit time_ch(n_proc, dask.threaded.get)
In [16]:
for n_proc in (8, 4, 2, 1):
    print(n_proc, "process(es)")
    %timeit time_ch(n_proc, dask.multiprocessing.get)
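For completeness, the distributed scheduler that the author could not get working would presumably be driven the same way, by passing a distributed client's get function to time_ch. The following is an untested sketch assuming dask.distributed is installed; Client() starts a local cluster.

from dask.distributed import Client

client = Client()                # local distributed cluster (untested here)
%timeit time_ch(8, client.get)   # pass the client's get as the scheduler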