The purpose of this section is for you to set up a complete fMRI analysis workflow yourself, so that in the end you are able to perform the analysis from A to Z, i.e. from preprocessing to group analysis. This section covers the preprocessing part, and the section Hands-on 2: Analysis will handle the analysis part.
We will use this opportunity to show you some nice additional interfaces/nodes that might not be relevant to your usual analysis. But it's always good to know that they exist. And hopefully, this will encourage you to explore all the other interfaces that Nipype puts at your fingertips.
In [ ]:
%%bash
datalad get -J 4 -d /data/ds000114 \
/data/ds000114/sub-0[234789]/ses-test/anat/sub-0[234789]_ses-test_T1w.nii.gz \
/data/ds000114/sub-0[234789]/ses-test/func/*fingerfootlips*
So let's get our hands dirty. First things first: it's always good to know which interfaces you want to use in your workflow and in which order you want to execute them. For the preprocessing workflow, I recommend that we use the following nodes:
1. Gunzip (Nipype)
2. Drop Dummy Scans (FSL)
3. Slice Time Correction (SPM)
4. Motion Correction (FSL)
5. Artifact Detection
6. Segmentation (SPM)
7. Coregistration (FSL)
8. Smoothing (FSL)
9. Apply Binary Mask (FSL)
10. Remove Linear Trends (Nipype)
Note: This workflow might be overkill concerning data manipulation, but it hopefully serves as a good Nipype exercise.
In [ ]:
from nilearn import plotting
%matplotlib inline
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-r7219/spm12_mcr/spm12')
Note: Ideally, you would also put the imports of all the interfaces that you use here at the top. But as we develop the workflow step by step, we will instead import the relevant modules as we go.
Let's create all the nodes that we need! Make sure to specify all relevant inputs and keep in mind which ones you will later need to connect in your pipeline.
We generally recommend creating the workflow and establishing all its connections in a later part of your script, as this helps to keep everything nicely together. But for this hands-on example, it makes sense to establish the connections between the nodes as we go.
And for this, we first need to create a workflow:
In [ ]:
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
In [ ]:
preproc = Workflow(name='work_preproc', base_dir='/output/')
In [ ]:
from nipype.algorithms.misc import Gunzip
In [ ]:
# Specify example input file
func_file = '/data/ds000114/sub-07/ses-test/func/sub-07_ses-test_task-fingerfootlips_bold.nii.gz'
# Initiate Gunzip node
gunzip_func = Node(Gunzip(in_file=func_file), name='gunzip_func')
The functional images of this dataset were recorded with 4 dummy scans at the beginning (see the corresponding publication), but those dummy scans have not yet been removed from the functional images.
To better illustrate this, let's plot the time course of a random voxel of the func_file we just defined:
In [ ]:
import nibabel as nb
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(nb.load(func_file).get_fdata()[32, 32, 15, :]);
In the figure above, we see extreme values at the very beginning, which indicate that the scanner had not yet reached steady state. Therefore, we want to exclude the dummy scans from the original data. This can be achieved with FSL's ExtractROI.
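If you want to see what ExtractROI will do before running it, you can check the number of volumes directly (a small sanity-check sketch using the nibabel import from above; the exact volume count depends on the dataset):
In [ ]:
# Number of volumes in the raw functional image; ExtractROI with t_min=4
# and t_size=-1 keeps everything from volume 4 to the end
n_vols = nb.load(func_file).shape[3]
print('%d volumes -> %d volumes after dropping the 4 dummy scans' % (n_vols, n_vols - 4))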
In [ ]:
from nipype.interfaces.fsl import ExtractROI
In [ ]:
extract = Node(ExtractROI(t_min=4, t_size=-1, output_type='NIFTI'),
name="extract")
This ExtractROI node can now be connected to the gunzip_func node from above. To do this, we use the following command:
In [ ]:
preproc.connect([(gunzip_func, extract, [('out_file', 'in_file')])])
Now on to the next step. Let's use SPM's SliceTiming to correct for the slice-wise acquisition of the volumes. As a reminder, the tutorial dataset was recorded with a repetition time (TR) of 2.5 seconds, with 30 slices per volume acquired in interleaved ascending order, and therefore with a time of acquisition (TA) of TR-(TR/num_slices).
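To make the TA formula concrete, here is the quick arithmetic behind the value we will pass to the node below:
In [ ]:
# TA = TR - (TR / num_slices): the time from the start of the first slice
# to the start of the last slice of a volume
TR = 2.5
num_slices = 30
TA = TR - (TR / num_slices)
print(TA)  # -> 2.4166...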
In [ ]:
from nipype.interfaces.spm import SliceTiming
In [ ]:
# Interleaved ascending acquisition: odd slices first, then even slices
slice_order = list(range(1, 31, 2)) + list(range(2, 31, 2))
print(slice_order)
In [ ]:
# Initiate SliceTiming node here
In [ ]:
slicetime = Node(SliceTiming(num_slices=30,
ref_slice=15,
slice_order=slice_order,
time_repetition=2.5,
time_acquisition=2.5-(2.5/30)),
name='slicetime')
Now the next step is to connect the SliceTiming node to the rest of the workflow, i.e. the ExtractROI node.
In [ ]:
# Connect SliceTiming node to the other nodes here
In [ ]:
preproc.connect([(extract, slicetime, [('roi_file', 'in_files')])])
In [ ]:
from nipype.interfaces.fsl import MCFLIRT
In [ ]:
# Initiate MCFLIRT node here
In [ ]:
mcflirt = Node(MCFLIRT(mean_vol=True,
save_plots=True),
name="mcflirt")
Connect the MCFLIRT node to the rest of the workflow.
In [ ]:
# Connect MCFLIRT node to the other nodes here
In [ ]:
preproc.connect([(slicetime, mcflirt, [('timecorrected_files', 'in_file')])])
In [ ]:
from nipype.algorithms.rapidart import ArtifactDetect
In [ ]:
art = Node(ArtifactDetect(norm_threshold=2,
zintensity_threshold=3,
mask_type='spm_global',
parameter_source='FSL',
use_differences=[True, False],
plot_type='svg'),
name="art")
The parameters above mean the following:
- norm_threshold - threshold to use to detect motion-related outliers when composite motion is being used
- zintensity_threshold - intensity Z-threshold to use to detect images that deviate from the mean
- mask_type - type of mask that should be used to mask the functional data; spm_global uses an spm_global-like calculation to determine the brain mask
- parameter_source - source of movement parameters
- use_differences - whether to use differences between successive motion (first element) and intensity (second element) parameter estimates in order to determine outliers

And this is how you connect this node to the rest of the workflow:
In [ ]:
preproc.connect([(mcflirt, art, [('out_file', 'realigned_files'),
('par_file', 'realignment_parameters')])
])
In [ ]:
from nipype.interfaces.spm import NewSegment
In [ ]:
# Use the following tissue specification to get a GM and WM probability map
tpm_img = '/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii'
tissue1 = ((tpm_img, 1), 1, (True, False), (False, False))
tissue2 = ((tpm_img, 2), 1, (True, False), (False, False))
tissue3 = ((tpm_img, 3), 2, (True, False), (False, False))
tissue4 = ((tpm_img, 4), 3, (False, False), (False, False))
tissue5 = ((tpm_img, 5), 4, (False, False), (False, False))
tissue6 = ((tpm_img, 6), 2, (False, False), (False, False))
tissues = [tissue1, tissue2, tissue3, tissue4, tissue5, tissue6]
In [ ]:
# Initiate NewSegment node here
In [ ]:
segment = Node(NewSegment(tissues=tissues), name='segment')
We will again use a Gunzip node to unzip the anatomical image, which we then want to use as input to the segmentation node. Here too we need to specify the anatomical image that we want to use; as before, this will later be handled directly by the input/output stream.
In [ ]:
# Specify example input file
anat_file = '/data/ds000114/sub-07/ses-test/anat/sub-07_ses-test_T1w.nii.gz'
# Initiate Gunzip node
gunzip_anat = Node(Gunzip(in_file=anat_file), name='gunzip_anat')
Now we can connect the NewSegment node to the rest of the workflow.
In [ ]:
# Connect NewSegment node to the other nodes here
In [ ]:
preproc.connect([(gunzip_anat, segment, [('out_file', 'channel_files')])])
As a next step, we will make sure that the functional images are coregistered to the anatomical image. For this, we will use FSL's FLIRT function. As we just created a white matter probability map, we can use this together with the Boundary-Based Registration (BBR) cost function to optimize the image coregistration. As some helpful notes:
- use a degree of freedom of 6
- specify the cost function as bbr
- use the schedule='/usr/share/fsl/5.0/etc/flirtsch/bbr.sch'
In [ ]:
from nipype.interfaces.fsl import FLIRT
In [ ]:
# Initiate FLIRT node here
In [ ]:
coreg = Node(FLIRT(dof=6,
cost='bbr',
schedule='/usr/share/fsl/5.0/etc/flirtsch/bbr.sch',
output_type='NIFTI'),
name="coreg")
In [ ]:
# Connect FLIRT node to the other nodes here
In [ ]:
preproc.connect([(gunzip_anat, coreg, [('out_file', 'reference')]),
(mcflirt, coreg, [('mean_img', 'in_file')])
])
As mentioned above, the bbr routine can use the subject-specific white matter probability map to guide the coregistration. But for this, we first need to create a binary mask out of the WM probability map. This can easily be done with FSL's Threshold interface.
In [ ]:
from nipype.interfaces.fsl import Threshold
# Threshold - Threshold WM probability image
threshold_WM = Node(Threshold(thresh=0.5,
args='-bin',
output_type='NIFTI'),
name="threshold_WM")
Now, to select the WM probability map that the NewSegment node created, we need a small helper function. This is because the output field native_class_images of the segmentation node gives us a list of lists, i.e. [[GM_prob], [WM_prob], [], [], [], []]. Therefore, using the following function, we can select the WM probability map, i.e. the second element of this list.
In [ ]:
# Select WM segmentation file from segmentation output
def get_wm(files):
return files[1][0]
# Connecting the segmentation node with the threshold node
preproc.connect([(segment, threshold_WM, [(('native_class_images', get_wm),
'in_file')])])
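To see what get_wm actually does, here is a quick illustration on a hypothetical nested list that mimics the structure of native_class_images (the file names are made up):
In [ ]:
# Hypothetical segmentation output: one sublist per tissue class
example_output = [['c1sub-07_T1w.nii'],  # GM probability map
                  ['c2sub-07_T1w.nii'],  # WM probability map
                  ['c3sub-07_T1w.nii'],  # CSF probability map
                  [], [], []]
print(get_wm(example_output))  # -> 'c2sub-07_T1w.nii'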
Now we can connect this Threshold node to the coregistration node from above.
In [ ]:
# Connect Threshold node to coregistration node above here
In [ ]:
preproc.connect([(threshold_WM, coreg, [('out_file', 'wm_seg')])])
In [ ]:
# Specify the isometric voxel resolution you want after coregistration
desired_voxel_iso = 4
# Apply coregistration warp to functional images
applywarp = Node(FLIRT(interp='spline',
apply_isoxfm=desired_voxel_iso,
output_type='NIFTI'),
name="applywarp")
Important: As you can see above, we also specified a variable desired_voxel_iso. This is very important at this stage; otherwise FLIRT would transform your functional images to the resolution of the anatomical image, which would dramatically increase the file size (e.g. to 1-10GB per file). If you don't want to change the voxel resolution, use the additional parameter no_resample=True. Importantly, for this to work, you still need to define apply_isoxfm.
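As a sketch of that alternative (not used in this workflow), a node that keeps the original functional resolution might look like this:
In [ ]:
# Alternative: coregister without resampling to a new voxel size
applywarp_noresample = Node(FLIRT(interp='spline',
                                  apply_isoxfm=desired_voxel_iso,
                                  no_resample=True,
                                  output_type='NIFTI'),
                            name='applywarp_noresample')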
In [ ]:
# Connecting the ApplyWarp node to all the other nodes
preproc.connect([(mcflirt, applywarp, [('out_file', 'in_file')]),
(coreg, applywarp, [('out_matrix_file', 'in_matrix_file')]),
(gunzip_anat, applywarp, [('out_file', 'reference')])
])
In [ ]:
from nipype.workflows.fmri.fsl.preprocess import create_susan_smooth
If you type create_susan_smooth? you can see how to specify the input variables to the susan workflow. In particular, they are:
- fwhm: set this value to 4 (or whichever value you want)
- mask_file: will be created in a later step
- in_file: will be handled while connecting to other nodes in the preproc workflow
In [ ]:
# Initiate SUSAN workflow here
In [ ]:
susan = create_susan_smooth(name='susan')
susan.inputs.inputnode.fwhm = 4
In [ ]:
# Connect the SUSAN workflow to the other nodes here
In [ ]:
preproc.connect([(applywarp, susan, [('out_file', 'inputnode.in_files')])])
There are many possible approaches to masking your functional images: no masking at all, a simple brain mask, or a mask that only keeps a certain kind of brain tissue, e.g. gray matter.
For the current example, we want to create a dilated gray matter mask. For this purpose we need to:
1. Resample the GM probability map to the same resolution as the functional images
2. Threshold this resampled probability map at a specific value
3. Dilate the resulting binary mask
The first step can be done in many ways (e.g. using freesurfer's mri_convert or nibabel), but in our case, we will use FSL's FLIRT. The trick is to use the probability map as both the input file and the reference file.
In [ ]:
from nipype.interfaces.fsl import FLIRT
# Initiate resample node
resample = Node(FLIRT(apply_isoxfm=desired_voxel_iso,
output_type='NIFTI'),
name="resample")
Luckily, the second and third step can be done with just one node. We can take almost the same Threshold node as above; we just need to add one additional argument, -dilF, which applies a maximum filtering of all voxels.
In [ ]:
from nipype.interfaces.fsl import Threshold
# Threshold - Threshold GM probability image
mask_GM = Node(Threshold(thresh=0.5,
args='-bin -dilF',
output_type='NIFTI'),
name="mask_GM")
# Select GM segmentation file from segmentation output
def get_gm(files):
return files[0][0]
Now we can connect the resample and the gray matter mask nodes to the segmentation node and to each other.
In [ ]:
preproc.connect([(segment, resample, [(('native_class_images', get_gm), 'in_file'),
(('native_class_images', get_gm), 'reference')
]),
(resample, mask_GM, [('out_file', 'in_file')])
])
This should do the trick.
In [ ]:
# Connect gray matter Mask node to the susan workflow here
In [ ]:
preproc.connect([(mask_GM, susan, [('out_file', 'inputnode.mask_file')])])
To apply the mask to the smoothed functional images, we will use FSL's ApplyMask interface.
In [ ]:
from nipype.interfaces.fsl import ApplyMask
Important: The susan workflow outputs a list of files, i.e. [smoothed_func.nii], instead of just the filename directly. If we used a normal Node for ApplyMask, this would lead to the following error:
TraitError: The 'in_file' trait of an ApplyMaskInput instance must be an existing file name, but a value of ['/output/work_preproc/susan/smooth/mapflow/_smooth0/asub-07_ses-test_task-fingerfootlips_bold_mcf_flirt_smooth.nii.gz'] <class 'list'> was specified.
To prevent this, we will use a MapNode and specify in_file as its iterfield. This way, the node can handle a list of inputs, because it knows that it has to apply itself iteratively to each element of the input list.
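If MapNode is new to you, here is a minimal toy sketch (independent of the preprocessing workflow) of how the iterfield mechanism behaves:
In [ ]:
from nipype import MapNode, Function

def square(x):
    # This toy function is run once per element of the iterfield
    return x ** 2

square_node = MapNode(Function(input_names=['x'], output_names=['out'],
                               function=square),
                      name='square', iterfield=['x'])
square_node.inputs.x = [1, 2, 3]
print(square_node.run().outputs.out)  # -> [1, 4, 9]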
In [ ]:
from nipype import MapNode
In [ ]:
# Initiate ApplyMask node here
In [ ]:
mask_func = MapNode(ApplyMask(output_type='NIFTI'),
name="mask_func",
iterfield=["in_file"])
In [ ]:
# Connect smoothed susan output file to ApplyMask node here
In [ ]:
preproc.connect([(susan, mask_func, [('outputnode.smoothed_files', 'in_file')]),
(mask_GM, mask_func, [('out_file', 'mask_file')])
])
In [ ]:
from nipype.algorithms.confounds import TSNR
In [ ]:
# Initiate TSNR node here
In [ ]:
detrend = Node(TSNR(regress_poly=2), name="detrend")
In [ ]:
# Connect the detrend node to the other nodes here
In [ ]:
preproc.connect([(mask_func, detrend, [('out_file', 'in_file')])])
SelectFiles and iterables
This is all well and good, but so far we still had to specify the input values for gunzip_anat and gunzip_func ourselves. How can we scale this up to multiple subjects and/or multiple functional images and make the workflow take the input directly from the BIDS dataset?
For this, we need SelectFiles and iterables! It's rather simple: specify a template and fill in the placeholder variables.
In [ ]:
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'anat': 'sub-{subject_id}/ses-{ses_id}/anat/'
'sub-{subject_id}_ses-test_T1w.nii.gz',
'func': 'sub-{subject_id}/ses-{ses_id}/func/'
'sub-{subject_id}_ses-{ses_id}_task-{task_id}_bold.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates,
base_directory='/data/ds000114',
sort_filelist=True),
name='selectfiles')
sf.inputs.ses_id='test'
sf.inputs.task_id='fingerfootlips'
Now we can specify over which subjects the workflow should iterate. To test the workflow, let's look at just subject 7 for now.
In [ ]:
subject_list = ['07']
sf.iterables = [('subject_id', subject_list)]
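To quickly verify that the templates resolve to existing files, you can also run a standalone copy of the node for one subject (a quick sketch; here we set subject_id directly instead of using iterables):
In [ ]:
# Standalone check of the file templates for one subject
check_sf = Node(SelectFiles(templates,
                            base_directory='/data/ds000114',
                            sort_filelist=True),
                name='check_selectfiles')
check_sf.inputs.subject_id = '07'
check_sf.inputs.ses_id = 'test'
check_sf.inputs.task_id = 'fingerfootlips'
print(check_sf.run().outputs)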
In [ ]:
# Connect SelectFiles node to the other nodes here
In [ ]:
preproc.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
(sf, gunzip_func, [('func', 'in_file')])])
In [ ]:
# Create preproc output graph
preproc.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_preproc/graph.png', width=750)
Now we are ready to run the workflow! Be careful with the n_procs parameter if you run a workflow in 'MultiProc' mode. n_procs specifies the number of jobs/cores your computer will use to run the workflow. If this number is too high, your computer will try to execute too many things at once and will most likely crash.
Note: If you're using a Docker container and FLIRT fails to run without any good reason, you might need to change memory settings in the Docker preferences (6 GB should be enough for this workflow).
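If you are unsure what a safe value is, you can derive n_procs from the number of available cores (a small sketch; leaving one core free keeps the machine responsive). You could then pass this value as plugin_args={'n_procs': n_procs}:
In [ ]:
# Use all but one of the available CPU cores
import multiprocessing
n_procs = max(multiprocessing.cpu_count() - 1, 1)
print('Using %d parallel processes' % n_procs)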
In [ ]:
preproc.run('MultiProc', plugin_args={'n_procs': 4})
In [ ]:
!tree /output/work_preproc/ -I '*js|*json|*pklz|_report|*dot|*html|*txt|*.m'
But what did we do specifically? Well, let's investigate.
In [ ]:
%matplotlib inline
In [ ]:
# Plot the motion parameters
import numpy as np
import matplotlib.pyplot as plt
par = np.loadtxt('/output/work_preproc/_subject_id_07/mcflirt/'
'asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.nii.gz.par')
fig, axes = plt.subplots(2, 1, figsize=(15, 5))
axes[0].set_ylabel('rotation (radians)')
axes[0].plot(par[:, :3])
axes[1].plot(par[:, 3:])
axes[1].set_xlabel('time (TR)')
axes[1].set_ylabel('translation (mm)');
The motion parameters seem to look OK. What about the detection of artifacts?
In [ ]:
# Showing the artifact detection output
from IPython.display import SVG
SVG(filename='/output/work_preproc/_subject_id_07/art/'
'plot.asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.svg')
Which volumes are problematic?
In [ ]:
outliers = np.loadtxt('/output/work_preproc/_subject_id_07/art/'
'art.asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_outliers.txt')
list(outliers.astype('int'))
In [ ]:
from nilearn import image as nli
from nilearn.plotting import plot_stat_map
%matplotlib inline
output = '/output/work_preproc/_subject_id_07/'
First, let's look at the tissue probability maps.
In [ ]:
anat = output + 'gunzip_anat/sub-07_ses-test_T1w.nii'
In [ ]:
plot_stat_map(
output + 'segment/c1sub-07_ses-test_T1w.nii', title='GM prob. map', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
In [ ]:
plot_stat_map(
output + 'segment/c2sub-07_ses-test_T1w.nii', title='WM prob. map', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
In [ ]:
plot_stat_map(
output + 'segment/c3sub-07_ses-test_T1w.nii', title='CSF prob. map', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
And what does the dilated gray matter mask that we used on the functional images look like?
In [ ]:
plot_stat_map(
output + 'mask_GM/c1sub-07_ses-test_T1w_flirt_thresh.nii', title='dilated GM Mask', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
In [ ]:
from nilearn import image as nli
from nilearn.plotting import plot_epi
%matplotlib inline
output = '/output/work_preproc/_subject_id_07/'
In [ ]:
plot_epi(output + 'mcflirt/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.nii.gz_mean_reg.nii.gz',
title='Motion Corrected mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
In [ ]:
mean = nli.mean_img(output + 'applywarp/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt.nii')
plot_epi(mean, title='Coregistered mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
In [ ]:
mean = nli.mean_img('/output/work_preproc/susan/_subject_id_07/smooth/mapflow/_smooth0/'
'asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt_smooth.nii.gz')
plot_epi(mean, title='Smoothed mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
In [ ]:
mean = nli.mean_img(output + 'mask_func/mapflow/_mask_func0/'
'asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt_smooth_masked.nii')
plot_epi(mean, title='Masked mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
In [ ]:
plot_epi(output + 'detrend/mean.nii.gz', title='Detrended mean image', display_mode='z',
cut_coords=range(-40, 21, 15), cmap=plt.cm.viridis);
That's all nice and beautiful, but what did smoothing and detrending actually do to the data?
In [ ]:
import nibabel as nb
%matplotlib inline
output = '/output/work_preproc/_subject_id_07/'
# Load the relevant datasets
mc = nb.load(output + 'applywarp/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt.nii')
smooth = nb.load('/output/work_preproc/susan/_subject_id_07/smooth/mapflow/'
'_smooth0/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt_smooth.nii.gz')
detrended_data = nb.load(output + 'detrend/detrend.nii.gz')
# Plot a representative voxel
x, y, z = 32, 34, 43
fig = plt.figure(figsize=(12, 4))
plt.plot(mc.get_fdata()[x, y, z, :])
plt.plot(smooth.get_fdata()[x, y, z, :])
plt.plot(detrended_data.get_fdata()[x, y, z, :])
plt.legend(['motion corrected', 'smoothed', 'detrended']);
In [ ]:
from nipype.interfaces.io import DataSink
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
Now the next step is to specify all the output that we want to keep in our output folder output. Make sure to keep:
- the outlier files and plots from artifact detection
- the motion parameters from motion correction
- the detrended functional image
In [ ]:
# Connect nodes to datasink here
In [ ]:
preproc.connect([(art, datasink, [('outlier_files', 'preproc.@outlier_files'),
('plot_files', 'preproc.@plot_files')]),
(mcflirt, datasink, [('par_file', 'preproc.@par')]),
(detrend, datasink, [('detrended_file', 'preproc.@func')]),
])
In [ ]:
preproc.run('MultiProc', plugin_args={'n_procs': 4})
Let's look now at the output of this datasink folder.
In [ ]:
!tree /output/datasink_handson -I '*js|*json|*pklz|_report|*dot|*html|*txt|*.m'
Much better! But we're not quite there yet. There are many unnecessary file specifiers that we can still get rid of. To do so, we can use DataSink's substitutions parameter. For this, we create a list of tuples: on the left, the string that we want to replace; on the right, what we want to replace it with.
In [ ]:
## Use the following substitutions for the DataSink output
substitutions = [('asub', 'sub'),
('_ses-test_task-fingerfootlips_bold_roi_mcf', ''),
('.nii.gz.par', '.par'),
]
# To get rid of the folder '_subject_id_07' and renaming detrend
substitutions += [('_subject_id_%s/detrend' % s,
'_subject_id_%s/sub-%s_detrend' % (s, s)) for s in subject_list]
substitutions += [('_subject_id_%s/' % s, '') for s in subject_list]
datasink.inputs.substitutions = substitutions
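Since DataSink applies these substitutions as plain, sequential string replacements, you can preview their effect on one of the file names from the tree above (an illustration of the mechanism only):
In [ ]:
# Preview how the first three substitutions rewrite a motion parameter file name
fname = 'asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.nii.gz.par'
for old, new in substitutions[:3]:
    fname = fname.replace(old, new)
print(fname)  # -> 'sub-07.par'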
Before we run the preprocessing workflow again, let's first delete the current output folder:
In [ ]:
# Deletes the current output folder
!rm -rf /output/datasink_handson
In [ ]:
# Runs the preprocessing workflow again, this time with substitutions
preproc.run('MultiProc', plugin_args={'n_procs': 4})
In [ ]:
!tree /output/datasink_handson -I '*js|*json|*pklz|_report|*dot|*html|*.m'
Perfect! Now let's run the whole workflow for all right-handed subjects. For this, you just need to change the subject_list variable and rerun the cells where this variable is used (i.e. sf.iterables and the DataSink substitutions).
In [ ]:
# Update 'subject_list' and its dependencies here
In [ ]:
subject_list = ['02', '03', '04', '07', '08', '09']
sf.iterables = [('subject_id', subject_list)]
In [ ]:
# To get rid of the folder '_subject_id_02' and renaming detrend
substitutions += [('_subject_id_%s/detrend' % s,
'_subject_id_%s/sub-%s_detrend' % (s, s)) for s in subject_list]
substitutions += [('_subject_id_%s/' % s, '') for s in subject_list]
datasink.inputs.substitutions = substitutions
Now we can run the workflow again, this time for all right-handed subjects in parallel.
In [ ]:
# Run the preprocessing workflow again, this time for all right-handed subjects
preproc.run('MultiProc', plugin_args={'n_procs': 4})
Now we're ready for the next section, Hands-on 2: How to create an fMRI analysis workflow!