This tutorial provides a basic overview of how to run ndmg manually within Python.
We begin by checking for dependencies,
then we set our input parameters,
then we simply run the pipeline.
Running the pipeline is quite simple: call ndmg_dwi_pipeline.ndmg_dwi_worker with the correct arguments.
Note that, although you can run the pipeline in Python, the absolute easiest way (outside Gigantum) is to run the pipeline from the command line once all dependencies are installed using the following command:
ndmg_bids </absolute/input/dir> </absolute/output/dir>
This will run a single session from the input directory, and output the results into your output directory.
But for now, let's look at running the pipeline in Python.
Let's begin!
In [11]:
import os
import os.path as op
import glob
import shutil
import warnings
import subprocess
from pathlib import Path
from ndmg.scripts import ndmg_dwi_pipeline
from ndmg.scripts.ndmg_bids import get_atlas
from ndmg.utils import cloud_utils
The code below is a simple check that AFNI and FSL are installed.
Afterwards, we set the atlas, input, and output paths.
In [12]:
# FSL
try:
    print(f"Your fsl directory is located here: {os.environ['FSLDIR']}")
except KeyError:
    raise AssertionError(
        "You do not have FSL installed! See installation instructions here: "
        "https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslInstallation"
    )

# AFNI
try:
    print(f"Your AFNI directory is located here: {subprocess.check_output('which afni', shell=True, universal_newlines=True)}")
except subprocess.CalledProcessError:
    raise AssertionError(
        "You do not have AFNI installed! See installation instructions here: "
        "https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/main_toc.html"
    )
Here, we download the atlases into a local ~/.ndmg directory and set the input and output paths.
In [21]:
# get atlases
ndmg_dir = Path.home() / ".ndmg"
atlas_dir = ndmg_dir / "ndmg_atlases"
get_atlas(str(atlas_dir), "2mm")
# Set the input and output directories
input_dir = ndmg_dir / "input"
out_dir = ndmg_dir / "output"
print(f"Your input and output directories will be: {input_dir} and {out_dir}")
assert op.exists(input_dir), f"You must have an input directory with data. Your input directory is located here: {input_dir}"
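If that assertion fails, you can create the directory skeleton first and then copy your BIDS-formatted data into it. A minimal sketch (populating the data is still up to you):
In [ ]:
# Create the expected directories if they don't exist yet.
# You still need to copy your own BIDS-formatted data into input_dir.
os.makedirs(input_dir, exist_ok=True)
os.makedirs(out_dir, exist_ok=True)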
Here, we define input variables to the pipeline.
To run the ndmg pipeline, you need four files:
t1w - a high-resolution anatomical image;
dwi - the diffusion image;
bvecs - the diffusion b-vectors; and
bvals - the diffusion b-values.
The naming convention is described in the BIDS spec.
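For the subject used below, a typical BIDS layout inside the input directory looks like this (session and file names taken from the paths in the next cell):
sub-0025864/
    ses-1/
        anat/
            sub-0025864_ses-1_T1w.nii.gz
        dwi/
            sub-0025864_ses-1_dwi.nii.gz
            sub-0025864_ses-1_dwi.bvec
            sub-0025864_ses-1_dwi.bval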
In [23]:
# Specify base directory and paths to input files (dwi, bvecs, bvals, and t1w required)
subject_id = 'sub-0025864'
# Define the location of our input files.
t1w = str(input_dir / f"{subject_id}/ses-1/anat/{subject_id}_ses-1_T1w.nii.gz")
dwi = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.nii.gz")
bvecs = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bvec")
bvals = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bval")
print(f"Your anatomical image location: {t1w}")
print(f"Your dwi image location: {dwi}")
print(f"Your bvector location: {bvecs}")
print(f"Your bvalue location: {bvals}")
Here, we choose the parameters to run the pipeline with. If you are unfamiliar with diffusion MRI theory, feel free to use the default parameters.
In [24]:
# Use the default parameters.
atlas = 'desikan'      # parcellation atlas used to define the graph nodes
mod_type = 'prob'      # probabilistic ('prob') or deterministic ('det') tractography
track_type = 'local'   # tractography style
mod_func = 'csd'       # diffusion model ('csd' or 'csa')
reg_style = 'native'   # space in which tractography is run
vox_size = '2mm'       # voxel resolution
seeds = 1              # seeding density for tractography
In [25]:
# Auto-set paths to neuroparc files
mask = str(atlas_dir / "atlases/mask/MNI152NLin6_res-2x2x2_T1w_descr-brainmask.nii.gz")
labels = [str(i) for i in (atlas_dir / "atlases/label/Human/").glob(f"*{atlas}*2x2x2.nii.gz")]
print(f"mask location : {mask}")
print(f"atlas location : {labels}")
In [28]:
ndmg_dwi_pipeline.ndmg_dwi_worker(
    dwi=dwi,
    bvals=bvals,
    bvecs=bvecs,
    t1w=t1w,
    atlas=atlas,
    mask=mask,
    labels=labels,
    outdir=str(out_dir),
    vox_size=vox_size,
    mod_type=mod_type,
    track_type=track_type,
    mod_func=mod_func,
    seeds=seeds,
    reg_style=reg_style,
    clean=False,
    skipeddy=True,
    skipreg=True,
)
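Once the worker finishes, you can browse what it produced. The exact file names depend on your parameters, so a simple directory walk (rather than hard-coded paths) is the safest way to look:
In [ ]:
# List everything the pipeline wrote to the output directory.
for root, dirs, files in os.walk(out_dir):
    for f in files:
        print(op.join(root, f))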
ndmg runs best as a standalone program on the command line.
The simplest form of the command, assuming you have input data, an output folder, and all dependencies installed, is the following:
ndmg_bids </absolute/input/dir> </absolute/output/dir>
Here, we'll show you how to set this up yourself.
This will run the first session from your input dataset and put the results into your output directory.
We recommend adding the --atlas flag so that graphs don't get generated for every possible atlas.
ndmg_bids --atlas desikan </absolute/input/dir> </absolute/output/dir>
You can use:
the --sp flag to set the registration space;
the --mf flag to set the diffusion model; and
the --mod flag to set deterministic / probabilistic tracking:
ndmg_bids --atlas desikan --sp <space> --mf <model> --mod <tracking style> </absolute/input/dir> </absolute/output/dir>
If you're having problems installing the program locally, it's often easier to use Docker.
docker pull neurodata/ndmg_dev:latest
Once you've downloaded the Docker image, you can:
Attach your local input and output folders with -v,
Run the image,
and input your participant and session labels into the container.
docker run -ti --rm --privileged -e DISPLAY=$DISPLAY -v <absolute/path/to/input/data>:/input -v <absolute/path/to/output/data>:/output neurodata/ndmg_dev:latest --participant_label <label> --session_label <number> --atlas desikan /input /output
You can also enter the container yourself and then run ndmg from inside the container.
docker run -ti --rm --privileged --entrypoint /bin/bash -e DISPLAY=$DISPLAY -v <absolute/path/to/input/data>:/input -v <absolute/path/to/output/data>:/output neurodata/ndmg_dev:latest
ndmg_bids --participant_label <label> --session_label <number> --atlas desikan /input /output