Some steps in a neuroimaging analysis are repetitive: running the same preprocessing on multiple subjects, or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has a mechanism called iterables.
The main homepage has a nice section about MapNode and iterables if you want to learn more. Also, if you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out synchronize and itersource.
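As a quick taste of the first of those: by default, multiple iterables on one node expand over the full cross product of their value lists, while setting synchronize = True pairs the lists element-wise instead. A minimal sketch, using a throw-away smoothing node and made-up file names purely for illustration:
In [ ]:
from nipype import Node
from nipype.interfaces.fsl import IsotropicSmooth

# Throw-away node, only used to illustrate synchronized iterables
smooth_sync = Node(IsotropicSmooth(), name='smooth_sync')

# Without synchronize this would expand to 2 x 2 = 4 copies of the node;
# with synchronize=True the value lists are paired element-wise, giving
# (fwhm=4, img1.nii.gz) and (fwhm=8, img2.nii.gz)
smooth_sync.iterables = [('fwhm', [4, 8]),
                         ('in_file', ['img1.nii.gz', 'img2.nii.gz'])]
smooth_sync.synchronize = True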
For example, let's assume we have a node (A) that does simple skull stripping, followed by a node (B) that does isotropic smoothing. Now, let's say that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 4mm, 8mm and 16mm.
In [ ]:
from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, IsotropicSmooth
# Initiate a skull stripping Node with BET
skullstrip = Node(BET(mask=True,
                      in_file='/data/ds102/sub-01/anat/sub-01_T1w.nii.gz'),
                  name="skullstrip")
Create a smoothing Node with IsotropicSmooth
In [ ]:
isosmooth = Node(IsotropicSmooth(), name='iso_smooth')
Now, using iterables to smooth with different FWHM values is as simple as this:
In [ ]:
isosmooth.iterables = ("fwhm", [4, 8, 16])
And to wrap it up: we need to create a workflow, connect the nodes and, finally, run the workflow in parallel.
In [ ]:
# Create the workflow
wf = Workflow(name="smoothflow")
wf.base_dir = "/data"
wf.connect(skullstrip, 'out_file', isosmooth, 'in_file')
# Run it in parallel (one core for each smoothing kernel)
wf.run('MultiProc', plugin_args={'n_procs': 3})
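The first argument of run selects the execution plugin. Omitting it runs the nodes serially with the default Linear plugin, and 'MultiProc' without plugin_args uses as many processes as there are CPU cores. Both variants are sketched below; re-running is cheap here because Nipype caches already-computed nodes in the working directory:
In [ ]:
# Serial execution with the default Linear plugin
wf.run()

# Parallel execution, letting MultiProc choose the number of processes itself
wf.run('MultiProc')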
If we visualize the graph with graph2use='exec', we can see where the parallelization actually takes place.
In [ ]:
# Visualize the detailed graph
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
Image(filename='/data/smoothflow/graph_detailed.dot.png')
Out[ ]:
If you look at the structure of the workflow directory, you can also see that for each smoothing kernel a specific folder was created, e.g. _fwhm_16.
In [ ]:
!tree /data/smoothflow -I '*txt|*pklz|report*|*.json|*js|*.dot|*.html'
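Because the iterable value is encoded in the folder name, you can also collect the smoothed outputs programmatically; a small sketch using glob, assuming the /data/smoothflow layout shown above:
In [ ]:
from glob import glob

# Gather the smoothed brains from all _fwhm_* subfolders of the workflow directory
smoothed_files = sorted(glob('/data/smoothflow/_fwhm_*/iso_smooth/*_smooth.nii.gz'))
print(smoothed_files)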
Now, let's visualize the results!
In [ ]:
%pylab inline
from nilearn import plotting
plotting.plot_anat(
    '/data/ds102/sub-01/anat/sub-01_T1w.nii.gz', title='original',
    display_mode='z', cut_coords=(-20, -10, 0, 10, 20), annotate=False)
plotting.plot_anat(
    '/data/smoothflow/skullstrip/sub-01_T1w_brain.nii.gz', title='skullstripped',
    display_mode='z', cut_coords=(-20, -10, 0, 10, 20), annotate=False)
plotting.plot_anat(
    '/data/smoothflow/_fwhm_4/iso_smooth/sub-01_T1w_brain_smooth.nii.gz', title='FWHM=4',
    display_mode='z', cut_coords=(-20, -10, 0, 10, 20), annotate=False)
plotting.plot_anat(
    '/data/smoothflow/_fwhm_8/iso_smooth/sub-01_T1w_brain_smooth.nii.gz', title='FWHM=8',
    display_mode='z', cut_coords=(-20, -10, 0, 10, 20), annotate=False)
plotting.plot_anat(
    '/data/smoothflow/_fwhm_16/iso_smooth/sub-01_T1w_brain_smooth.nii.gz', title='FWHM=16',
    display_mode='z', cut_coords=(-20, -10, 0, 10, 20), annotate=False)
Out[ ]:
IdentityInterface (special use case of iterables)
A special use case of iterables is the IdentityInterface. The IdentityInterface allows you to create Nodes that do simple identity mapping, i.e. Nodes that only work on parameters/strings.
For example, let's say you want to run a preprocessing workflow over 5 subjects, each with two runs and two different smoothing kernels (as is done in the Preprocessing Example). We can do this as follows:
In [ ]:
# First, let's specify the list of input variables
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
session_list = ['run-1', 'run-2']
fwhm_widths = [4, 8]
Now, we can create the IdentityInterface Node
In [ ]:
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id', 'session_id', 'fwhm_id']),
                  name="infosource")
infosource.iterables = [('subject_id', subject_list),
                        ('session_id', session_list),
                        ('fwhm_id', fwhm_widths)]
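Note that multiple iterables on one node again expand over the cross product of the value lists, so this infosource alone spawns 5 x 2 x 2 = 20 parameter combinations, and everything connected downstream runs once per combination. A quick illustrative check, using itertools just to count the combinations:
In [ ]:
from itertools import product

# The workflow engine expands infosource over the cross product of all
# iterable value lists: 5 subjects x 2 sessions x 2 kernels = 20 combinations
combinations = list(product(subject_list, session_list, fwhm_widths))
print(len(combinations))
print(combinations[:3])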
That's it. Now we can connect the output fields of this infosource node to any other node, wherever we need them.
In [ ]:
infosource.outputs
Out[ ]:
For example, like this:
In [ ]:
workflow.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
                                             ('session_id', 'session_id')]),
                  (infosource, smooth, [('fwhm_id', 'fwhm')])
                  ])
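Note that workflow, selectfiles and smooth are not defined in this snippet; they come from the Preprocessing Example mentioned above. A minimal sketch of what such nodes could look like, assuming a layout under /data/ds102 (the file template below is made up for illustration and would need to match your actual dataset):
In [ ]:
from nipype import Node, Workflow, SelectFiles
from nipype.interfaces.fsl import IsotropicSmooth

# Hypothetical file template -- adjust it to the layout of your dataset
templates = {'func': '{subject_id}/func/{subject_id}_task-flanker_{session_id}_bold.nii.gz'}
selectfiles = Node(SelectFiles(templates, base_directory='/data/ds102'),
                   name='selectfiles')

# Smoothing node whose 'fwhm' input is fed by infosource
smooth = Node(IsotropicSmooth(), name='smooth')

# Workflow that ties everything together
workflow = Workflow(name='preproc', base_dir='/data')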