Example 3: Normalize data to MNI template

This example covers the normalization of data to a template. Some people prefer to normalize the data during preprocessing, just before smoothing. I prefer to do the 1st-level analysis completely in subject space and to normalize only the contrasts for the 2nd-level analysis.

For the current example, we will take the 1st-level contrasts computed in the previous experiment (once with fwhm=4mm and once with fwhm=8mm) and normalize them into MNI space. To show two different approaches, we will do the normalization once with ANTs and once with SPM.

Normalization with ANTs

The normalization with ANTs requires that you first compute the transformation that brings the anatomical image of each subject into template space. Depending on your system, this might take a few hours per subject. To facilitate this step, I've already computed the transformations for all subjects.

The resulting files can be found under:


In [ ]:
!tree /data/antsdir/sub-0*/


/data/antsdir/sub-01/
├── transformComposite.h5
└── transform_Warped.nii.gz
/data/antsdir/sub-02/
├── transformComposite.h5
└── transform_Warped.nii.gz
/data/antsdir/sub-03/
├── transformComposite.h5
└── transform_Warped.nii.gz
/data/antsdir/sub-04/
├── transformComposite.h5
└── transform_Warped.nii.gz
/data/antsdir/sub-05/
├── transformComposite.h5
└── transform_Warped.nii.gz

0 directories, 10 files

And the script to compute those files can be found here:


In [ ]:
%load /data/antsdir/script_ANTS_registration.py
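The script itself is not reproduced here, but the core of such a registration can be sketched as follows. This is a hypothetical reconstruction, not the actual script: the real parameters (and the usual preceding rigid and affine stages) may differ, and all paths are placeholders.

```python
# Hypothetical sketch of a per-subject antsRegistration call (the real script,
# loaded above, may differ); paths and stage parameters are placeholders.
template = '/usr/share/fsl/data/standard/MNI152_T1_2mm.nii.gz'
anat = '/data/ds102/sub-01/anat/sub-01_T1w.nii.gz'

cmd = ['antsRegistration',
       '--dimensionality', '3',
       # output prefix 'transform' + explicit name for the warped anatomy
       '--output', '[transform,transform_Warped.nii.gz]',
       '--write-composite-transform', '1',
       '--interpolation', 'Linear',
       # a single non-linear SyN stage driven by mutual information
       '--transform', 'SyN[0.1,3.0,0.0]',
       '--metric', 'MI[%s,%s,1,32,Regular,0.25]' % (template, anat),
       '--convergence', '[100x70x50x20,1e-6,10]',
       '--shrink-factors', '8x4x2x1',
       '--smoothing-sigmas', '3x2x1x0vox']

print(' '.join(cmd))
```

Note that it is the `--write-composite-transform` flag that makes antsRegistration bundle all stages into a single `transformComposite.h5` file like the ones in the tree listing above.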

Now let's start with the ANTs normalization workflow!

Imports

First, we need to import all modules we later want to use.


In [ ]:
from os.path import join as opj
from nipype.interfaces.ants import ApplyTransforms
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.pipeline.engine import Workflow, Node, MapNode
from nipype.interfaces.fsl import Info

Experiment parameters

It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.


In [ ]:
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'

# list of subject identifiers
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']

# list of session identifiers
session_list = ['run-1', 'run-2']

# Smoothing widths used during preprocessing
fwhm = [4, 8]

# Template to normalize to
template = Info.standard_image('MNI152_T1_2mm.nii.gz')
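`Info.standard_image` simply resolves a filename inside FSL's standard-image directory (`$FSLDIR/data/standard`). A rough stdlib-only equivalent, assuming FSL lives under `/usr/share/fsl` when `$FSLDIR` is unset:

```python
import os

# Rough stdlib equivalent of Info.standard_image('MNI152_T1_2mm.nii.gz'):
# resolve the file inside $FSLDIR/data/standard
fsldir = os.environ.get('FSLDIR', '/usr/share/fsl')
template = os.path.join(fsldir, 'data', 'standard', 'MNI152_T1_2mm.nii.gz')
print(template)
```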

Specify Nodes

Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.


In [ ]:
# Apply Transformation - applies the computed composite transform to the contrast images
apply2con = MapNode(ApplyTransforms(args='--float',
                                    input_image_type=3,
                                    interpolation='Linear',
                                    invert_transform_flags=[False],
                                    num_threads=1,
                                    reference_image=template,
                                    terminal_output='file'),
                    name='apply2con', iterfield=['input_image'])
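Because `apply2con` is a MapNode iterating over `input_image`, Nipype will run one `antsApplyTransforms` call per contrast image. A hypothetical sketch of the commands this corresponds to (file names are placeholders; the flags mirror the interface inputs above):

```python
# Hypothetical sketch: one antsApplyTransforms call per contrast image,
# as generated by the MapNode (file names are placeholders).
template = 'MNI152_T1_2mm.nii.gz'
transform = 'transformComposite.h5'
con_images = ['con_0001.nii', 'con_0002.nii']

cmds = [['antsApplyTransforms',
         '--float',                      # args='--float'
         '-d', '3',
         '-e', '3',                      # input_image_type=3
         '-n', 'Linear',                 # interpolation='Linear'
         '-i', con,                      # input_image
         '-r', template,                 # reference_image
         '-t', transform,                # transforms
         '-o', con.replace('.nii', '_trans.nii')]
        for con in con_images]

for cmd in cmds:
    print(' '.join(cmd))
```

The `_trans` suffix in the output name matches Nipype's default output naming for `ApplyTransforms`, which is why the normalized contrasts appear later as e.g. `con_0001_trans.nii`.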

Specify input & output stream

Specify where the input data can be found & where and how to save the output data.


In [ ]:
# Infosource - a function-free node to iterate over subject names and smoothing widths
infosource = Node(IdentityInterface(fields=['subject_id', 'fwhm_id']),
                  name="infosource")
infosource.iterables = [('subject_id', subject_list),
                        ('fwhm_id', fwhm)]

# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'con': opj(output_dir, '1stLevel',
                        '{subject_id}_fwhm{fwhm_id}', '???_00??.nii'),
             'transform': opj('../data', 'antsdir', '{subject_id}',
                              'transformComposite.h5')}
selectfiles = Node(SelectFiles(templates,
                               base_directory=experiment_dir,
                               sort_filelist=True),
                   name="selectfiles")

# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
                         container=output_dir),
                name="datasink")

# Use the following DataSink output substitutions
substitutions = [('_subject_id_', '')]
subjFolders = [('_fwhm_id_%s%s' % (f, sub), '%s_fwhm%s' % (sub, f))
               for f in fwhm
               for sub in subject_list]
subjFolders += [('_apply2con%s/' % (i), '') for i in range(7)]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
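The substitutions are plain, ordered string replacements that DataSink applies to every output path. A quick stand-alone demonstration of how they turn Nipype's auto-generated iterable folder names into readable ones (the example path is hypothetical):

```python
# Stand-alone demonstration of the DataSink substitutions defined above
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
fwhm = [4, 8]

substitutions = [('_subject_id_', '')]
substitutions += [('_fwhm_id_%s%s' % (f, sub), '%s_fwhm%s' % (sub, f))
                  for f in fwhm for sub in subject_list]
substitutions += [('_apply2con%s/' % i, '') for i in range(7)]

# DataSink applies the replacements in order to every output path
path = ('/output/datasink/norm_ants/'
        '_fwhm_id_4_subject_id_sub-01/_apply2con0/con_0001_trans.nii')
for old, new in substitutions:
    path = path.replace(old, new)

print(path)  # /output/datasink/norm_ants/sub-01_fwhm4/con_0001_trans.nii
```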

Specify Workflow

Create a workflow and connect the interface nodes and the I/O stream to each other.


In [ ]:
# Initiation of the ANTs normalization workflow
antsflow = Workflow(name='antsflow')
antsflow.base_dir = opj(experiment_dir, working_dir)

# Connect up the ANTs normalization components
antsflow.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
                                             ('fwhm_id', 'fwhm_id')]),
                  (selectfiles, apply2con, [('con', 'input_image'),
                                            ('transform', 'transforms')]),
                  (apply2con, datasink, [('output_image', 'norm_ants.@con')]),
                  ])

Visualize the workflow

It always helps to visualize your workflow.


In [ ]:
# Create ANTs normalization graph
antsflow.write_graph(graph2use='colored', format='png', simple_form=True)

# Visualize the graph
from IPython.display import Image
Image(filename=opj(antsflow.base_dir, 'antsflow', 'graph.dot.png'))


170301-21:36:54,735 workflow INFO:
	 Converting dotfile: /output/workingdir/antsflow/graph.dot to png format
Out[ ]:

Run the Workflow

Now that everything is ready, we can run the ANTs normalization workflow. Change n_procs to the number of jobs/cores you want to use.


In [ ]:
antsflow.run('MultiProc', plugin_args={'n_procs': 4})
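A common choice is to set `n_procs` to the number of available cores; a small stdlib sketch (the resulting count is machine-dependent):

```python
import os

# Use all available cores, falling back to 1 if the count cannot be determined
n_procs = os.cpu_count() or 1
print(n_procs)
```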

Normalization with SPM12

The normalization with SPM12 is rather straightforward. The only thing we need to do is run the Normalize12 module. So let's start!

Imports

First, we need to import all modules we later want to use.


In [ ]:
from os.path import join as opj
from nipype.interfaces.spm import Normalize12
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.algorithms.misc import Gunzip
from nipype.pipeline.engine import Workflow, Node

Experiment parameters

It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.


In [ ]:
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'

# list of subject identifiers
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']

# list of session identifiers
session_list = ['run-1', 'run-2']

# Smoothing widths used during preprocessing
fwhm = [4, 8]

# Template to normalize to
template = '/opt/spm12/spm12_mcr/spm12/tpm/TPM.nii'

Specify Nodes

Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.


In [ ]:
# Gunzip - unzip the anatomical image
gunzip = Node(Gunzip(), name="gunzip")

# Normalize - normalizes functional and structural images to the MNI template
normalize = Node(Normalize12(jobtype='estwrite',
                             tpm=template,
                             write_voxel_sizes=[2, 2, 2]),
                 name="normalize")

Specify input & output stream

Specify where the input data can be found & where and how to save the output data.


In [ ]:
# Infosource - a function-free node to iterate over subject names and smoothing widths
infosource = Node(IdentityInterface(fields=['subject_id', 'fwhm_id']),
                  name="infosource")
infosource.iterables = [('subject_id', subject_list),
                        ('fwhm_id', fwhm)]

# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'con': opj(output_dir, '1stLevel',
                        '{subject_id}_fwhm{fwhm_id}', '???_00??.nii'),
             'anat': opj('../data', 'ds102', '{subject_id}', 'anat',
                         '{subject_id}_T1w.nii.gz')}

selectfiles = Node(SelectFiles(templates,
                               base_directory=experiment_dir,
                               sort_filelist=True),
                   name="selectfiles")

# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
                         container=output_dir),
                name="datasink")

# Use the following DataSink output substitutions
substitutions = [('_subject_id_', '')]
subjFolders = [('_fwhm_id_%s%s' % (f, sub), '%s_fwhm%s' % (sub, f))
               for f in fwhm
               for sub in subject_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
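SelectFiles fills the `{subject_id}` and `{fwhm_id}` placeholders with the values coming from infosource and then globs the result. The mechanism is ordinary Python string formatting, as this small sketch shows:

```python
from os.path import join as opj

# The same template string used above, filled in the way SelectFiles does it;
# the '?' wildcards are then resolved by glob against the base directory
template = opj('datasink', '1stLevel', '{subject_id}_fwhm{fwhm_id}',
               '???_00??.nii')
filled = template.format(subject_id='sub-01', fwhm_id=4)
print(filled)
```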

Specify Workflow

Create a workflow and connect the interface nodes and the I/O stream to each other.


In [ ]:
# Specify Normalization-Workflow & Connect Nodes
spmflow = Workflow(name='spmflow')
spmflow.base_dir = opj(experiment_dir, working_dir)

# Connect up SPM normalization components
spmflow.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
                                            ('fwhm_id', 'fwhm_id')]),
                 (selectfiles, normalize, [('con', 'apply_to_files')]),
                 (selectfiles, gunzip, [('anat', 'in_file')]),
                 (gunzip, normalize, [('out_file', 'image_to_align')]),
                 (normalize, datasink, [('normalized_files', 'norm_spm.@files'),
                                        ('normalized_image', 'norm_spm.@image'),
                                        ]),
                 ])

Visualize the workflow

It always helps to visualize your workflow.


In [ ]:
# Create SPM normalization graph
spmflow.write_graph(graph2use='colored', format='png', simple_form=True)

# Visualize the graph
from IPython.display import Image
Image(filename=opj(spmflow.base_dir, 'spmflow', 'graph.dot.png'))


170301-21:37:27,337 workflow INFO:
	 Converting dotfile: /output/workingdir/spmflow/graph.dot to png format
Out[ ]:

Run the Workflow

Now that everything is ready, we can run the SPM normalization workflow. Change n_procs to the number of jobs/cores you want to use.


In [ ]:
spmflow.run('MultiProc', plugin_args={'n_procs': 4})

Comparison between ANTs and SPM normalization

Now that we have run the normalization with both ANTs and SPM, let us compare their outputs.


In [ ]:
%pylab inline
from nilearn.plotting import plot_stat_map
anatimg = '/usr/share/fsl/data/standard/MNI152_T1_2mm.nii.gz'


Populating the interactive namespace from numpy and matplotlib

First, let's compare the normalization of the anatomical images:


In [ ]:
plot_stat_map(
    '/data/antsdir/sub-01/transform_Warped.nii.gz', title='anatomy - ANTs',
    bg_img=anatimg, threshold=200, display_mode='ortho', cut_coords=(-50, 0, -10))
plot_stat_map(
    '/output/datasink/norm_spm/sub-01_fwhm4/wsub-01_T1w.nii', title='anatomy - SPM',
    bg_img=anatimg, threshold=200, display_mode='ortho', cut_coords=(-50, 0, -10))


Out[ ]:
<nilearn.plotting.displays.OrthoSlicer at 0x7fdf28f5e290>

And what about the contrast images?


In [ ]:
plot_stat_map(
    '/output/datasink/norm_ants/sub-01_fwhm8/con_0001_trans.nii', title='contrast1 - fwhm=8 - ANTs',
    bg_img=anatimg, threshold=3, display_mode='ortho', cut_coords=(-50, 0, -10))
plot_stat_map(
    '/output/datasink/norm_spm/sub-01_fwhm8/wcon_0001.nii', title='contrast1 - fwhm=8 - SPM',
    bg_img=anatimg, threshold=3, display_mode='ortho', cut_coords=(-50, 0, -10))


Out[ ]:
<nilearn.plotting.displays.OrthoSlicer at 0x7fdf2b0f55d0>