If you want to iterate over a list of inputs, but need to feed all iterated outputs afterwards as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs. (The main homepage has a nice section about MapNode and iterables if you want to learn more.)
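Conceptually, a MapNode applies its interface to each element of the input list the way Python's built-in map() applies a function element-wise. A rough pure-Python analogy (not nipype code):

```python
def double(x):
    return x * 2

# map() calls the function once per element and collects the results
# back into a list -- the same shape of computation a MapNode performs
print(list(map(double, [1, 2, 3])))  # [2, 4, 6]
```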
Let's demonstrate this with a simple function interface:
In [ ]:
from nipype import Function
def square_func(x):
    return x ** 2
square = Function(["x"], ["f_x"], square_func)
We see that this function just takes a numeric input and returns its squared value.
In [ ]:
square.run(x=2).outputs.f_x
Out[ ]:
4
What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow in multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a MapNode.
The MapNode constructor has a field called iterfield, which tells it which inputs should expect a list.
In [ ]:
from nipype import MapNode
square_node = MapNode(square, name="square", iterfield=["x"])
In [ ]:
square_node.inputs.x = list(range(4))
square_node.run().outputs.f_x
Out[ ]:
[0, 1, 4, 9]
Because iterfield can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired element-wise; it does not compute a combinatoric product of the lists.
In [ ]:
def power_func(x, y):
    return x ** y
In [ ]:
power = Function(["x", "y"], ["f_xy"], power_func)
power_node = MapNode(power, name="power", iterfield=["x", "y"])
power_node.inputs.x = list(range(4))
power_node.inputs.y = list(range(4))
print(power_node.run().outputs.f_xy)
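The element-wise pairing above matches Python's zip() semantics. A pure-Python sketch of what the MapNode computes here:

```python
def power_func(x, y):
    return x ** y

# iterfield lists are consumed in lockstep, like zip() --
# not crossed like itertools.product()
paired = [power_func(x, y) for x, y in zip(range(4), range(4))]
print(paired)  # [1, 1, 4, 27]
```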
But not every input needs to be an iterfield.
In [ ]:
power_node = MapNode(power, name="power", iterfield=["x"])
power_node.inputs.x = list(range(4))
power_node.inputs.y = 3
print(power_node.run().outputs.f_xy)
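Holding y fixed while x iterates is the same as partially applying the function. A pure-Python analogy (not nipype code):

```python
from functools import partial

def power_func(x, y):
    return x ** y

cube = partial(power_func, y=3)  # y is fixed; only x varies
print([cube(x) for x in range(4)])  # [0, 1, 8, 27]
```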
As in the case of iterables, each underlying MapNode execution can happen in parallel. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly.
Let's say we have multiple functional images (A), and each of them should be motion corrected (B1, B2, B3, ...). Afterwards, we want to put them all together into a GLM, i.e. the input for the GLM should be an array [B1, B2, B3, ...]. Iterables can't do that: they would split up the pipeline. Therefore, we need MapNodes.
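Schematically, the pattern is map-then-gather. A plain-Python sketch with hypothetical helper names (motion_correct and fit_glm stand in for the real interfaces):

```python
def motion_correct(func_image):
    # hypothetical stand-in for a realignment step (produces B1, B2, ...)
    return func_image + '_mc'

def fit_glm(images):
    # hypothetical stand-in for the GLM: receives the WHOLE list at once
    return 'glm_over_%d_runs' % len(images)

runs = ['run-1.nii', 'run-2.nii', 'run-3.nii']
corrected = [motion_correct(r) for r in runs]  # MapNode: one call per run
print(fit_glm(corrected))  # glm_over_3_runs
```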
Let's look at a simple example, where we want to motion correct two functional images. For this we need two nodes:
In [ ]:
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.spm import Realign
from nipype.pipeline.engine import Node, MapNode, Workflow
files = ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz',
         '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz']

realign = Node(Realign(register_to_mean=True),
               name='motion_correction')
If we try to specify the input for the Gunzip node with a simple Node, we get the following error:
In [ ]:
gunzip = Node(Gunzip(), name='gunzip')
gunzip.inputs.in_file = files
TraitError: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz', '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz'] <type 'list'> was specified.
But if we do it with a MapNode, it works:
In [ ]:
gunzip = MapNode(Gunzip(), name='gunzip',
                 iterfield=['in_file'])
gunzip.inputs.in_file = files
Now, we just have to create a workflow, connect the nodes and we can run it:
In [ ]:
mcflow = Workflow(name='realign_with_spm')
mcflow.connect(gunzip, 'out_file', realign, 'in_files')
mcflow.base_dir = '/data'
mcflow.run('MultiProc', plugin_args={'n_procs': 4})