MapNode

If you want to iterate over a list of inputs, but then need to feed all of the iterated outputs as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs. (The main homepage has a nice section about MapNode and iterables if you want to learn more.)
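Conceptually, a MapNode behaves like Python's built-in `map`: one interface is applied to every element of a list input, and the per-element outputs are collected back into a list. A minimal pure-Python analogy (this is just an illustration, not Nipype code):

```python
def map_node(func, inputs):
    """Apply `func` to each element of `inputs` and collect the results,
    mimicking how a MapNode fans out over an iterfield and gathers outputs."""
    return [func(x) for x in inputs]

print(map_node(lambda x: x ** 2, [0, 1, 2, 3]))  # [0, 1, 4, 9]
```

The key difference from an iterable is that the results come back as a single list, ready to be passed as one input to the next node.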

Let's demonstrate this with a simple function interface:


In [ ]:
from nipype import Function
def square_func(x):
    return x ** 2
square = Function(["x"], ["f_x"], square_func)

We see that this function just takes a numeric input and returns its squared value.


In [ ]:
square.run(x=2).outputs.f_x


Out[ ]:
4

What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow into multiple sub-workflows. But say we were building a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would create a bunch of sum nodes, each receiving just one number from the list. The solution here is to use a MapNode.

The MapNode constructor has a field called iterfield, which tells it which inputs should expect a list.


In [ ]:
from nipype import MapNode
square_node = MapNode(square, name="square", iterfield=["x"])

In [ ]:
square_node.inputs.x = [0, 1, 2, 3]
square_node.run().outputs.f_x


170904-05:47:03,281 workflow INFO:
	 Executing node square in dir: /tmp/tmpuynl2xyq/square
170904-05:47:03,294 workflow INFO:
	 Executing node _square0 in dir: /tmp/tmpuynl2xyq/square/mapflow/_square0
170904-05:47:03,309 workflow INFO:
	 Executing node _square1 in dir: /tmp/tmpuynl2xyq/square/mapflow/_square1
170904-05:47:03,317 workflow INFO:
	 Executing node _square2 in dir: /tmp/tmpuynl2xyq/square/mapflow/_square2
170904-05:47:03,324 workflow INFO:
	 Executing node _square3 in dir: /tmp/tmpuynl2xyq/square/mapflow/_square3
Out[ ]:
[0, 1, 4, 9]

Because iterfield can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired; it does not compute a combinatoric product of the lists.
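The element-wise pairing is the same as Python's `zip`; the alternative it does *not* do is `itertools.product`. A quick pure-Python sketch of the distinction (using the same input lists as the example below):

```python
from itertools import product

x = [0, 1, 2, 3]
y = [0, 1, 2, 3]

# MapNode with iterfield=["x", "y"] pairs the lists element-wise, like zip:
paired = [a ** b for a, b in zip(x, y)]
print(paired)  # [1, 1, 4, 27]

# It does NOT form the full cross product, which would look like this:
crossed = [a ** b for a, b in product(x, y)]
print(len(crossed))  # 16 combinations instead of 4
```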


In [ ]:
def power_func(x, y):
    return x ** y

In [ ]:
power = Function(["x", "y"], ["f_xy"], power_func)
power_node = MapNode(power, name="power", iterfield=["x", "y"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = [0, 1, 2, 3]
print(power_node.run().outputs.f_xy)


170904-05:47:03,771 workflow INFO:
	 Executing node power in dir: /tmp/tmp7_h6xuee/power
170904-05:47:03,780 workflow INFO:
	 Executing node _power0 in dir: /tmp/tmp7_h6xuee/power/mapflow/_power0
170904-05:47:03,790 workflow INFO:
	 Executing node _power1 in dir: /tmp/tmp7_h6xuee/power/mapflow/_power1
170904-05:47:03,799 workflow INFO:
	 Executing node _power2 in dir: /tmp/tmp7_h6xuee/power/mapflow/_power2
170904-05:47:03,806 workflow INFO:
	 Executing node _power3 in dir: /tmp/tmp7_h6xuee/power/mapflow/_power3
[1, 1, 4, 27]

But not every input needs to be an iterfield.


In [ ]:
power_node = MapNode(power, name="power", iterfield=["x"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = 3
print(power_node.run().outputs.f_xy)


170904-05:47:04,103 workflow INFO:
	 Executing node power in dir: /tmp/tmplaunq_yj/power
170904-05:47:04,113 workflow INFO:
	 Executing node _power0 in dir: /tmp/tmplaunq_yj/power/mapflow/_power0
170904-05:47:04,122 workflow INFO:
	 Executing node _power1 in dir: /tmp/tmplaunq_yj/power/mapflow/_power1
170904-05:47:04,132 workflow INFO:
	 Executing node _power2 in dir: /tmp/tmplaunq_yj/power/mapflow/_power2
170904-05:47:04,139 workflow INFO:
	 Executing node _power3 in dir: /tmp/tmplaunq_yj/power/mapflow/_power3
[0, 1, 8, 27]

As in the case of iterables, each underlying MapNode execution can happen in parallel. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly.
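The mapped sub-nodes (_square0, _square1, ...) are independent of one another, which is what makes this parallelism possible. As a rough pure-Python analogy (a sketch of the idea, not Nipype's actual scheduler), a pool can run the mapped computations concurrently and still return the outputs in order:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x ** 2

# Run the independent per-element computations concurrently;
# pool.map preserves the input order in its results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, [0, 1, 2, 3]))

print(results)  # [0, 1, 4, 9]
```

In Nipype itself, you get this behavior by running the workflow with a parallel execution plugin such as MultiProc, as shown later in this section.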

Why is this important?

Suppose we have multiple functional images (A), and each of them should be motion corrected (B1, B2, B3, ...). Afterwards, however, we want to put them all together into a GLM, i.e. the input for the GLM should be an array of [B1, B2, B3, ...]. Iterables can't do that: they would split up the pipeline. Therefore, we need MapNodes.

Let's look at a simple example, where we want to motion correct two functional images. For this we need two nodes:

  • Gunzip, to unzip the files (plural)
  • Realign, to do the motion correction

In [ ]:
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.spm import Realign
from nipype.pipeline.engine import Node, MapNode, Workflow

files = ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz',
         '/data/ds000114/sub-02/ses-test/func/sub-02_ses-test_task-fingerfootlips_bold.nii.gz']

realign = Node(Realign(register_to_mean=True),
               name='motion_correction')

If we try to specify the input for the Gunzip node with a simple Node, we get the following error:


In [ ]:
gunzip = Node(Gunzip(), name='gunzip',)
gunzip.inputs.in_file = files


---------------------------------------------------------------------------
TraitError                                Traceback (most recent call last)
<ipython-input-22-4c705ede4d1a> in <module>()
      1 gunzip = Node(Gunzip(), name='gunzip',)
----> 2 gunzip.inputs.in_file = files

/opt/conda/envs/neuro3/lib/python3.6/site-packages/nipype/interfaces/traits_extension.py in validate(self, object, name, value)
     81             Note: The 'fast validator' version performs this check in C.
     82         """
---> 83         validated_value = super(BaseFile, self).validate(object, name, value)
     84         if not self.exists:
     85             return validated_value

/opt/conda/envs/neuro3/lib/python3.6/site-packages/traits/trait_types.py in validate(self, object, name, value)
    409             return str( value )
    410 
--> 411         self.error( object, name, value )
    412 
    413     def create_editor ( self ):

/opt/conda/envs/neuro3/lib/python3.6/site-packages/traits/trait_handlers.py in error(self, object, name, value)
    170         """
    171         raise TraitError( object, name, self.full_info( object, name, value ),
--> 172                           value )
    173 
    174     def full_info ( self, object, name, value ):

TraitError: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz', '/data/ds000114/sub-02/ses-test/func/sub-02_ses-test_task-fingerfootlips_bold.nii.gz'] <class 'list'> was specified.

But if we do it with a MapNode, it works:


In [ ]:
gunzip = MapNode(Gunzip(), name='gunzip',
                 iterfield=['in_file'])
gunzip.inputs.in_file = files

Now we just have to create a workflow, connect the nodes, and run it:


In [ ]:
mcflow = Workflow(name='realign_with_spm')
mcflow.connect(gunzip, 'out_file', realign, 'in_files')
mcflow.base_dir = '/output'
mcflow.run('MultiProc', plugin_args={'n_procs': 4})


170904-05:47:23,396 workflow INFO:
	 Workflow realign_with_spm settings: ['check', 'execution', 'logging']
170904-05:47:23,401 workflow INFO:
	 Running in parallel.
170904-05:47:23,405 workflow INFO:
	 Executing: gunzip ID: 0
170904-05:47:23,408 workflow INFO:
	 Adding 2 jobs for mapnode gunzip
170904-05:47:23,414 workflow INFO:
	 Executing: _gunzip0 ID: 2
170904-05:47:23,416 workflow INFO:
	 Executing: _gunzip1 ID: 3
170904-05:47:23,417 workflow INFO:
	 Executing node _gunzip0 in dir: /output/realign_with_spm/gunzip/mapflow/_gunzip0
170904-05:47:23,419 workflow INFO:
	 Executing node _gunzip1 in dir: /output/realign_with_spm/gunzip/mapflow/_gunzip1

170904-05:47:24,103 workflow INFO:
	 [Job finished] jobname: _gunzip0 jobid: 2
170904-05:47:24,113 workflow INFO:
	 [Job finished] jobname: _gunzip1 jobid: 3
170904-05:47:24,115 workflow INFO:
	 Executing: gunzip ID: 0
170904-05:47:24,119 workflow INFO:
	 Executing node gunzip in dir: /output/realign_with_spm/gunzip
170904-05:47:24,126 workflow INFO:
	 Executing node _gunzip0 in dir: /output/realign_with_spm/gunzip/mapflow/_gunzip0
170904-05:47:24,129 workflow INFO:
	 Collecting precomputed outputs
170904-05:47:24,136 workflow INFO:
	 Executing node _gunzip1 in dir: /output/realign_with_spm/gunzip/mapflow/_gunzip1
170904-05:47:24,139 workflow INFO:
	 Collecting precomputed outputs
170904-05:47:24,144 workflow INFO:
	 [Job finished] jobname: gunzip jobid: 0
170904-05:47:24,147 workflow INFO:
	 Executing: motion_correction ID: 1
170904-05:47:24,153 workflow INFO:
	 Executing node motion_correction in dir: /output/realign_with_spm/motion_correction
170904-05:49:01,535 workflow INFO:
	 [Job finished] jobname: motion_correction jobid: 1
Out[ ]:
<networkx.classes.digraph.DiGraph at 0x7f11c340fbe0>