Example 2: 1st-level Analysis

In this example, we will take the preprocessed output from the first example and run a 1st-level analysis for each subject. For this we need to do the following steps:

  1. Extract onset times of stimuli from the TSV file
  2. Specify the model (TR, high pass filter, onset times, etc.)
  3. Specify contrasts to compute
  4. Estimate contrasts

In the previous example, we used two different smoothing kernels (fwhm=4 and fwhm=8). Therefore, let us run the 1st-level analysis for both versions.

So, let's begin!

Imports

First, we need to import all modules we later want to use.


In [ ]:
%matplotlib inline
from os.path import join as opj
import json
from nipype.interfaces.spm import Level1Design, EstimateModel, EstimateContrast
from nipype.algorithms.modelgen import SpecifySPMModel
from nipype.interfaces.utility import Function, IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.pipeline.engine import Workflow, Node

Experiment parameters

It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.


In [ ]:
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'

# list of subject identifiers
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05',
                'sub-06', 'sub-07', 'sub-08', 'sub-09', 'sub-10']

# TR of functional images
with open('/data/ds000114/task-fingerfootlips_bold.json', 'rt') as fp:
    task_info = json.load(fp)
TR = task_info['RepetitionTime']

# Smoothing widths used during preprocessing
fwhm = [4, 8]
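
As a quick, optional sanity check (not part of the original pipeline), we can print the values we just read to make sure the BIDS sidecar was parsed correctly (for ds000114 the TR should be 2.5 seconds).


In [ ]:
# Optional check of the parameters defined above
print('TR = %s s' % TR)
print('Subjects: %d, smoothing kernels: %s' % (len(subject_list), fwhm))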

Specify Nodes

Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.


In [ ]:
# SpecifyModel - Generates SPM-specific Model
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
                                 input_units='secs',
                                 output_units='secs',
                                 time_repetition=TR,
                                 high_pass_filter_cutoff=128),
                 name="modelspec")

# Level1Design - Generates an SPM design matrix
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
                                 timing_units='secs',
                                 interscan_interval=TR,
                                 model_serial_correlations='FAST'),
                    name="level1design")

# EstimateModel - estimate the parameters of the model
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
                      name="level1estimate")

# EstimateContrast - estimates contrasts
level1conest = Node(EstimateContrast(), name="level1conest")

Specify GLM contrasts

For the GLM analysis, we also need to define the contrasts that we want to investigate. To recap, we had three different conditions in the fingerfootlips task of this dataset:

  • finger
  • foot
  • lips

Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):


In [ ]:
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']

# Contrasts
cont01 = ['average',        'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger',         'T', condition_names, [1, 0, 0]]
cont03 = ['Foot',           'T', condition_names, [0, 1, 0]]
cont04 = ['Lips',           'T', condition_names, [0, 0, 1]]
cont05 = ['Finger > others','T', condition_names, [1, -0.5, -0.5]]
cont06 = ['Foot > others',  'T', condition_names, [-0.5, 1, -0.5]]
cont07 = ['Lips > others',  'T', condition_names, [-0.5, -0.5, 1]]

cont08 = ['activation',     'F', [cont02, cont03, cont04]]
cont09 = ['differences',    'F', [cont05, cont06, cont07]]

contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
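
As a small, optional sanity check (not part of the original notebook), we can verify that every T-contrast has one weight per condition and inspect the sum of the weights; the "X > others" contrasts should sum to zero.


In [ ]:
# Optional sanity check: each T-contrast needs one weight per condition,
# and the differential contrasts should have weights that sum to zero.
for name, stat, conds, weights in [cont01, cont02, cont03, cont04,
                                   cont05, cont06, cont07]:
    assert len(weights) == len(conds)
    print('%-16s sum of weights = %.2f' % (name, sum(weights)))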

Specify GLM Model

The next step is to get information such as stimulus onsets, durations and other regressors into the GLM model. For this, we need to create a helper function, in our case called subjectinfo.

To recap, let's see what we have in the TSV file for each run:


In [ ]:
!cat /data/ds000114/task-fingerfootlips_events.tsv


onset	duration	weight	trial_type
10	15.0		1	Finger
40	15.0		1	Foot
70	15.0		1	Lips
100	15.0		1	Finger
130	15.0		1	Foot
160	15.0		1	Lips
190	15.0		1	Finger
220	15.0		1	Foot
250	15.0		1	Lips
280	15.0		1	Finger
310	15.0		1	Foot
340	15.0		1	Lips
370	15.0		1	Finger
400	15.0		1	Foot
430	15.0		1	Lips

We can also load the file into a data frame using the pandas library.


In [ ]:
import pandas as pd
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo


Out[ ]:
onset duration weight trial_type
0 10 15.0 1 Finger
1 40 15.0 1 Foot
2 70 15.0 1 Lips
3 100 15.0 1 Finger
4 130 15.0 1 Foot
5 160 15.0 1 Lips
6 190 15.0 1 Finger
7 220 15.0 1 Foot
8 250 15.0 1 Lips
9 280 15.0 1 Finger
10 310 15.0 1 Foot
11 340 15.0 1 Lips
12 370 15.0 1 Finger
13 400 15.0 1 Foot
14 430 15.0 1 Lips

And finally we need to separate the onsets of the three conditions, i.e. group by trial_type. This can be done as follows:


In [ ]:
for group in trialinfo.groupby('trial_type'):
    print(group)
    print("")


('Finger',     onset  duration  weight trial_type
0      10      15.0       1     Finger
3     100      15.0       1     Finger
6     190      15.0       1     Finger
9     280      15.0       1     Finger
12    370      15.0       1     Finger)

('Foot',     onset  duration  weight trial_type
1      40      15.0       1       Foot
4     130      15.0       1       Foot
7     220      15.0       1       Foot
10    310      15.0       1       Foot
13    400      15.0       1       Foot)

('Lips',     onset  duration  weight trial_type
2      70      15.0       1       Lips
5     160      15.0       1       Lips
8     250      15.0       1       Lips
11    340      15.0       1       Lips
14    430      15.0       1       Lips)

Now, let us incorporate all this in the helper function subjectinfo.


In [ ]:
def subjectinfo(subject_id):

    import pandas as pd
    from nipype.interfaces.base import Bunch
    
    trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
    trialinfo.head()
    conditions = []
    onsets = []
    durations = []

    for group in trialinfo.groupby('trial_type'):
        conditions.append(group[0])
        onsets.append(list(group[1].onset - 10)) # subtract 10s to account for the 4 dummy scans removed during preprocessing
        durations.append(group[1].duration.tolist())

    subject_info = [Bunch(conditions=conditions,
                          onsets=onsets,
                          durations=durations,
                          #amplitudes=None,
                          #tmod=None,
                          #pmod=None,
                          #regressor_names=None,
                          #regressors=None
                         )]

    return subject_info  # this output will later be fed into the model specification node

# Get Subject Info - get subject specific condition information
getsubjectinfo = Node(Function(input_names=['subject_id'],
                               output_names=['subject_info'],
                               function=subjectinfo),
                      name='getsubjectinfo')
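
Since subjectinfo is a plain Python function, we can also call it directly to inspect the Bunch that will be passed to SpecifySPMModel. This is just a quick check and assumes the events file is available at the path used inside the function (the subject_id argument is not used here, because all subjects share the same events file).


In [ ]:
# Inspect the conditions, onsets and durations the helper will provide
info = subjectinfo('sub-01')
print(info[0].conditions)
print(info[0].onsets)
print(info[0].durations)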

Specify input & output stream

Specify where the input data can be found & where and how to save the output data.


In [ ]:
# Infosource - a function-free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id',
                                            'fwhm_id',
                                            'contrasts'],
                                    contrasts=contrast_list),
                  name="infosource")
infosource.iterables = [('subject_id', subject_list),
                        ('fwhm_id', fwhm)]

# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'func': opj(output_dir, 'preproc', '{subject_id}', 'task-{task_id}',
                         'fwhm-{fwhm_id}_s{subject_id}_ses-test_task-{task_id}_bold.nii'),
             'mc_param': opj(output_dir, 'preproc', '{subject_id}', 'task-{task_id}',
                             '{subject_id}_ses-test_task-{task_id}_bold.par'),
             'outliers': opj(output_dir, 'preproc', '{subject_id}', 'task-{task_id}', 
                             'art.{subject_id}_ses-test_task-{task_id}_bold_outliers.txt')}
selectfiles = Node(SelectFiles(templates,
                               base_directory=experiment_dir,
                               sort_filelist=True),
                   name="selectfiles")
selectfiles.inputs.task_id = 'fingerfootlips'

# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
                         container=output_dir),
                name="datasink")

# Use the following DataSink output substitutions
substitutions = [('_subject_id_', '')]
subjFolders = [('_fwhm_id_%s%s' % (f, sub), '%s/fwhm-%s' % (sub, f))
               for f in fwhm
               for sub in subject_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
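
To see what these substitutions do, we can apply them by hand to an example folder name as it appears in the working directory. This is just a string preview (not part of the workflow); DataSink performs the same replacements on the real output paths.


In [ ]:
# Preview how the substitutions rewrite one example iterable folder name
example = '_fwhm_id_4_subject_id_sub-01'
for old, new in substitutions:
    example = example.replace(old, new)
print(example)  # expected: 'sub-01/fwhm-4'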

Specify Workflow

Create a workflow and connect the interface nodes and the I/O stream to each other.


In [ ]:
# Initiation of the 1st-level analysis workflow
l1analysis = Workflow(name='l1analysis')
l1analysis.base_dir = opj(experiment_dir, working_dir)

# Connect up the 1st-level analysis components
l1analysis.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
                                               ('fwhm_id', 'fwhm_id')]),
                    (infosource, getsubjectinfo, [('subject_id',
                                                   'subject_id')]),
                    (getsubjectinfo, modelspec, [('subject_info',
                                                  'subject_info')]),
                    (infosource, level1conest, [('contrasts', 'contrasts')]),
                    (selectfiles, modelspec, [('func', 'functional_runs')]),
                    (selectfiles, modelspec, [('mc_param', 'realignment_parameters'),
                                              ('outliers', 'outlier_files')]),
                    (modelspec, level1design, [('session_info',
                                                'session_info')]),
                    (level1design, level1estimate, [('spm_mat_file',
                                                     'spm_mat_file')]),
                    (level1estimate, level1conest, [('spm_mat_file',
                                                     'spm_mat_file'),
                                                    ('beta_images',
                                                     'beta_images'),
                                                    ('residual_image',
                                                     'residual_image')]),
                    (level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
                                              ('spmT_images', '1stLevel.@T'),
                                              ('con_images', '1stLevel.@con'),
                                              ('spmF_images', '1stLevel.@F'),
                                              ('ess_images', '1stLevel.@ess'),
                                              ]),
                    ])

Visualize the workflow

It always helps to visualize your workflow.


In [ ]:
# Create 1st-level analysis output graph
l1analysis.write_graph(graph2use='colored', format='png', simple_form=True)

# Visualize the graph
from IPython.display import Image
Image(filename=opj(l1analysis.base_dir, 'l1analysis', 'graph.dot.png'))


170903-22:18:57,824 workflow INFO:
	 Generated workflow graph: /output/workingdir/l1analysis/graph.dot.png (graph2use=colored, simple_form=True).
Out[ ]:

Run the Workflow

Now that everything is ready, we can run the 1st-level analysis workflow. Change n_procs to the number of jobs/cores you want to use.


In [ ]:
l1analysis.run('MultiProc', plugin_args={'n_procs': 4})
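
Tip (not part of the original notebook): if you ever want to test the pipeline quickly before a full run, you can temporarily restrict the iterables to a single subject and smoothing kernel, for example:


In [ ]:
# Optional test run on a single subject and kernel; restore the full
# iterables before running the complete analysis.
infosource.iterables = [('subject_id', ['sub-01']),
                        ('fwhm_id', [4])]
l1analysis.run('MultiProc', plugin_args={'n_procs': 1})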

Inspect output

Let's check the structure of the output folder to see if we have everything we wanted to save. You should have nine contrast images (con_*.nii for T-contrasts and ess_*.nii for F-contrasts) and nine statistic images (spmT_*.nii and spmF_*.nii) for every subject and smoothing kernel.


In [ ]:
!tree /output/datasink/1stLevel


/output/datasink/1stLevel
├── sub-01
│   ├── fwhm-4
│   │   ├── con_0001.nii
│   │   ├── con_0002.nii
│   │   ├── con_0003.nii
│   │   ├── con_0004.nii
│   │   ├── con_0005.nii
│   │   ├── con_0006.nii
│   │   ├── con_0007.nii
│   │   ├── ess_0008.nii
│   │   ├── ess_0009.nii
│   │   ├── spmF_0008.nii
│   │   ├── spmF_0009.nii
│   │   ├── SPM.mat
│   │   ├── spmT_0001.nii
│   │   ├── spmT_0002.nii
│   │   ├── spmT_0003.nii
│   │   ├── spmT_0004.nii
│   │   ├── spmT_0005.nii
│   │   ├── spmT_0006.nii
│   │   └── spmT_0007.nii
│   └── fwhm-8
│       ├── con_0001.nii
│       ├── con_0002.nii
│       ├── con_0003.nii
│       ├── con_0004.nii
│       ├── con_0005.nii
│       ├── con_0006.nii
│       ├── con_0007.nii
│       ├── ess_0008.nii
│       ├── ess_0009.nii
│       ├── spmF_0008.nii
│       ├── spmF_0009.nii
│       ├── SPM.mat
│       ├── spmT_0001.nii
│       ├── spmT_0002.nii
│       ├── spmT_0003.nii
│       ├── spmT_0004.nii
│       ├── spmT_0005.nii
│       ├── spmT_0006.nii
│       └── spmT_0007.nii
└── sub-02...sub-10

30 directories, 380 files

Visualize results

Let's look at the contrasts of one subject that we've just computed. First, let's see what difference the smoothing kernel makes for the average contrast.


In [ ]:
from nilearn.plotting import plot_stat_map
anatimg = '/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_preproc.nii.gz'
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0001.nii', title='average - fwhm=4',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-8/spmT_0001.nii', title='average - fwhm=8',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)


Out[ ]:
<nilearn.plotting.displays.YSlicer at 0x7f0ff7227898>

Now, let's look at the three contrasts Finger, Foot, Lips.


In [ ]:
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0002.nii', title='finger - fwhm=4',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0003.nii', title='foot - fwhm=4',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0004.nii', title='lips - fwhm=4',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)


Out[ ]:
<nilearn.plotting.displays.YSlicer at 0x7f0fffae20b8>

We can also check the three differential contrasts: Finger > others, Foot > others and Lips > others.


In [ ]:
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0005.nii', title='finger > others - fwhm=4',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0006.nii', title='foot > others - fwhm=4',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0007.nii', title='lips > others - fwhm=4',
    bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1)


Out[ ]:
<nilearn.plotting.displays.YSlicer at 0x7f0fffc132b0>

Special case

There is something special about the Finger contrast, so let's take a look at it in all subjects.


In [ ]:
plot_stat_map(
    '/output/datasink/1stLevel/sub-01/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-01',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-01/anat/sub-01_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-02',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-03/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-03',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-03/anat/sub-03_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-04/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-04',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-04/anat/sub-04_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-05/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-05',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-05/anat/sub-05_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-06/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-06',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-06/anat/sub-06_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-07/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-07',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-07/anat/sub-07_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-08/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-08',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-08/anat/sub-08_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-09/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-09',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-09/anat/sub-09_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)
plot_stat_map(
    '/output/datasink/1stLevel/sub-10/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-10',
    bg_img='/data/ds000114/derivatives/fmriprep/sub-10/anat/sub-10_t1w_preproc.nii.gz',
    threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1)


Out[ ]:
<nilearn.plotting.displays.YSlicer at 0x7f0ffdf14dd8>

What you might see is that the hemisphere of the main cluster differs between subjects. This is because all subjects were asked to use their dominant hand, either right or left. Three subjects (sub-01, sub-06 and sub-10) were left-handed. This can be seen in the pictures above, where we find the main cluster in the left hemisphere for right-handed subjects and in the right hemisphere for left-handed subjects.

Because of this, we will use only right-handed subjects for the following analyses.