Data Output

Just as important as data input is data output. A data output module allows you to restructure and rename computed output and to spatially separate the relevant output files from the temporary intermediate files in the working directory. Nipype provides the following modules to handle data stream output:

DataSink
JSONFileSink
MySQLSink
SQLiteSink
XNATSink

This tutorial covers only DataSink. For the rest, see the interfaces.io section on the official homepage.

Preparation

Before we can use DataSink we first need to run a workflow. For this purpose, let's create a very short preprocessing workflow that realigns and smooths one functional image of one subject.

First, let's create a SelectFiles node to grab the functional image of our subject. For an explanation of this step, see the Data Input tutorial.


In [ ]:
from nipype import SelectFiles, Node

# Create SelectFiles node
templates={'func': '{subject_id}/func/{subject_id}_task-flanker_run-1_bold.nii.gz'}
sf = Node(SelectFiles(templates),
          name='selectfiles')
sf.inputs.base_directory = '/data/ds102'
sf.inputs.subject_id = 'sub-01'

Second, let's create the motion correction and smoothing nodes. For an explanation of this step, see the Nodes and Interfaces tutorial.


In [ ]:
from nipype.interfaces.fsl import MCFLIRT, IsotropicSmooth

# Create Motion Correction Node
mcflirt = Node(MCFLIRT(mean_vol=True,
                       save_plots=True),
               name='mcflirt')

# Create Smoothing node
smooth = Node(IsotropicSmooth(fwhm=4),
              name='smooth')

Third, let's create the workflow that will contain those three nodes. For an explanation of this step, see the Workflow tutorial.


In [ ]:
from nipype import Workflow
from os.path import abspath

# Create a preprocessing workflow
wf = Workflow(name="preprocWF")
wf.base_dir = 'working_dir'

# Connect the three nodes to each other
wf.connect([(sf, mcflirt, [("func", "in_file")]),
            (mcflirt, smooth, [("out_file", "in_file")])])

Now that everything is set up, let's run the preprocessing workflow.


In [ ]:
wf.run()

After the execution of the workflow, all the data is hidden in the working directory 'working_dir'. Let's take a closer look at the content of this folder:

working_dir
└── preprocWF
    ├── d3.js
    ├── graph1.json
    ├── graph.json
    ├── index.html
    ├── mcflirt
    │   ├── _0x6148b774a1205e01fbc692453a68ee85.json
    │   ├── command.txt
    │   ├── _inputs.pklz
    │   ├── _node.pklz
    │   ├── _report
    │   │   └── report.rst
    │   ├── result_mcflirt.pklz
    │   └── sub-01_task-flanker_run-1_bold_mcf.nii.gz
    ├── selectfiles
    │   ├── _0x6a583c5c1c472209ca26f29f15c0bd38.json
    │   ├── _inputs.pklz
    │   ├── _node.pklz
    │   ├── _report
    │   │   └── report.rst
    │   └── result_selectfiles.pklz
    └── smooth
        ├── _0x553087282cd3b58a5c06b5f9699308bf.json
        ├── command.txt
        ├── _inputs.pklz
        ├── _node.pklz
        ├── _report
        │   └── report.rst
        ├── result_smooth.pklz
        └── sub-01_task-flanker_run-1_bold_mcf_smooth.nii.gz

As we can see, there is a lot of content that we might not really care about. To relocate and rename the files that are relevant to you, you can use DataSink.

DataSink

DataSink is Nipype's standard output module to restructure your output files. It allows you to relocate and rename files that you deem relevant.

Based on the preprocessing pipeline above, let's say we want to keep the smoothed functional images as well as the motion correction parameters. To do this, we first need to create the DataSink object.


In [ ]:
from nipype.interfaces.io import DataSink

# Create DataSink object
sinker = Node(DataSink(), name='sinker')

# Name of the output folder
sinker.inputs.base_directory = 'output'

# Connect DataSink with the relevant nodes
wf.connect([(smooth, sinker, [('out_file', 'in_file')]),
            (mcflirt, sinker, [('mean_img', 'mean_img'),
                               ('par_file', 'par_file')]),
            ])
wf.run()

Let's take a look at the output folder:

output
├── in_file
│   └── sub-01_task-flanker_run-1_bold_mcf_smooth.nii.gz
├── mean_img
│   └── sub-01_task-flanker_run-1_bold_mcf.nii.gz_mean_reg.nii.gz
└── par_file
    └── sub-01_task-flanker_run-1_bold_mcf.nii.gz.par

This looks nice; it is exactly what we asked for. But having a separate output folder for each individual output file might be suboptimal. So let's change the code above to save all output in one folder, which we will call 'preproc'.

For this we can use the same code as above. We only have to change the connection part:


In [ ]:
wf.connect([(smooth, sinker, [('out_file', 'preproc.@in_file')]),
            (mcflirt, sinker, [('mean_img', 'preproc.@mean_img'),
                               ('par_file', 'preproc.@par_file')]),
            ])
wf.run()
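The period and the `@` sign in the connection strings control the folder layout: each period-separated part becomes a subfolder, while a part prefixed with `@` does not create a subfolder of its own, so `'preproc.@in_file'` places the file directly in `preproc`. A minimal pure-Python sketch of this mapping (the function `datasink_dest` is a hypothetical helper for illustration, not part of Nipype):

```python
def datasink_dest(connection_name):
    """Rough sketch of how DataSink turns a connection name into a folder path.

    Each '.'-separated part becomes a subfolder; a part prefixed with '@'
    does not add a subfolder of its own.
    """
    parts = connection_name.split('.')
    return '/'.join(p for p in parts if not p.startswith('@'))

print(datasink_dest('in_file'))           # -> in_file
print(datasink_dest('preproc.@in_file'))  # -> preproc
print(datasink_dest('preproc.sub'))       # -> preproc/sub
```

This explains why the first run produced one folder per connected input (`in_file`, `mean_img`, `par_file`), while the `preproc.@...` names all collapse into a single `preproc` folder.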

Let's take a look at the new output folder structure:

output
└── preproc
    ├── sub-01_task-flanker_run-1_bold_mcf.nii.gz_mean_reg.nii.gz
    ├── sub-01_task-flanker_run-1_bold_mcf.nii.gz.par
    └── sub-01_task-flanker_run-1_bold_mcf_smooth.nii.gz

This is already much better. But what if you want to rename the output files to make them a bit more readable? For this, DataSink provides the substitutions input field.

For example, let's assume we want to get rid of the strings '_task-flanker' and '_bold_mcf', rename the mean file, and adapt the file ending of the motion parameter file:


In [ ]:
# Define substitution strings
substitutions = [('_task-flanker', ''),
                 ('_bold_mcf', ''),
                 ('.nii.gz_mean_reg', '_mean'),
                 ('.nii.gz.par', '.par')]

# Feed the substitution strings to the DataSink node
sinker.inputs.substitutions = substitutions

# Run the workflow again with the substitutions in place
wf.run()

Now, let's take a final look at the output folder:

output
└── preproc
    ├── sub-01_run-1_mean.nii.gz
    ├── sub-01_run-1.par
    └── sub-01_run-1_smooth.nii.gz

Cool, much cleaner!
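Under the hood, substitutions behave like an ordered chain of string replacements, which is why the pairs above can build on each other. DataSink also accepts a regexp_substitutions input field for pattern-based renaming. A minimal sketch of this behavior (`apply_substitutions` is a hypothetical helper mimicking the logic, not Nipype's actual implementation):

```python
import re

def apply_substitutions(filename, substitutions, regexp_substitutions=()):
    # Plain string replacements are applied first, in order...
    for old, new in substitutions:
        filename = filename.replace(old, new)
    # ...then regular-expression replacements, also in order.
    for pattern, new in regexp_substitutions:
        filename = re.sub(pattern, new, filename)
    return filename

name = 'sub-01_task-flanker_run-1_bold_mcf_smooth.nii.gz'
subs = [('_task-flanker', ''), ('_bold_mcf', '')]
print(apply_substitutions(name, subs))  # -> sub-01_run-1_smooth.nii.gz
```

Because the replacements are applied in order, you can chain them: a later pair may match text that an earlier pair has already rewritten.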