In [64]:
import sys
import os.path as osp
import os
import shlex
import subprocess
import json
sys.path.append('/home/grg/git/alfa')
os.chdir('/home/grg/git/alfa')

def shell_execute(cmd):
    # Run a command and return its standard output.
    # shlex.split is safer than str.split(' ') for commands with quoting
    # or repeated spaces.
    proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out
Let's say I need to type these commands:
ANTS 3 -m CC[ /home/grg/data/ALFA_DWI/10013/T1/10013_mabonlm_nobias_spm_c2_d2.nii.gz, /home/grg/data/ALFA_DWI/10013/DWI/10013_B0_brain_seg_1.nii.gz, 1, 4] -r Gauss[0,3] -t Elast[1.5] -i 30x20x10 -o /home/grg/data/ALFA_DWI/10013/DWI/ants_elast_dwi-to-t1_.nii.gz

WarpImageMultiTransform 3 /home/grg/data/ALFA_DWI/10013/DWI/10013_MD.nii.gz /home/grg/data/ALFA_DWI/10013/DWI/10013_MD_t1space.nii.gz -R /home/grg/data/ALFA_DWI/10013/T1/10013_mabonlm_nobias.nii /home/grg/data/ALFA_DWI/10013/DWI/ants_elast_dwi-to-t1_Warp.nii.gz /home/grg/data/ALFA_DWI/10013/DWI/ants_elast_dwi-to-t1_Affine.txt
I'll first create a JSON giving a template of the commands.
In [50]:
j = {'ants_dwi_to_t1': u'ANTS 3 -m CC[ %s, %s, 1, 4] -r Gauss[0,3] -t Elast[1.5] -i 30x20x10 -o %s',
'warp_md_to_t1': u'WarpImageMultiTransform 3 %s %s -R %s %s %s'}
with open('/tmp/templates.json', 'w') as f:
    json.dump(j, f)
Each $\%s$ represents a parameter (such as a filename) that will be provided later.
This is like separating the bones (the command itself) from the meat (the data).
$thesaurus$ will then fill in the $\%s$ blanks: just name the desired command and give the parameters in the same order as they appear in the command.
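The substitution step itself is plain positional string formatting. A minimal sketch, using the templates defined above (the `fill_template` helper is hypothetical, not part of $thesaurus$):

```python
# Templates as defined above; the %s slots are filled in argument order.
templates = {
    'ants_dwi_to_t1': 'ANTS 3 -m CC[ %s, %s, 1, 4] -r Gauss[0,3] -t Elast[1.5] -i 30x20x10 -o %s',
    'warp_md_to_t1': 'WarpImageMultiTransform 3 %s %s -R %s %s %s',
}

def fill_template(name, args):
    # Hypothetical helper: substitute the positional arguments
    # into the named command template.
    return templates[name] % tuple(args)

cmd = fill_template('ants_dwi_to_t1', ('target.nii', 'moving.nii', 'trans_'))
```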
In [58]:
cmd = './thesaurus.py -n --name ants_dwi_to_t1 /tmp/templates.json target.nii moving.nii trans_'
print('$', cmd)
print(shell_execute(cmd))

cmd = './thesaurus.py -n --name warp_md_to_t1 /tmp/templates.json target.nii moving.nii output.nii trans_Warp.nii trans_Affine.txt'
print('\n$', cmd)
print(shell_execute(cmd))
The $-n$ option is a safety lock: it only displays the command instead of running it.
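Such a dry-run flag can be sketched with `argparse` (hypothetical; not $thesaurus$'s actual argument handling):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-n', dest='dry_run', action='store_true',
                    help='safety lock: display the command, do not run it')
parser.add_argument('--name', help='keyname of the command template')
args = parser.parse_args(['-n', '--name', 'ants_dwi_to_t1'])

cmd = 'ANTS 3 -m CC[ target.nii, moving.nii, 1, 4] ...'
if args.dry_run:
    print('$', cmd)  # display only
# else: the command would actually be executed here
```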
$thesaurus$ can work with structured databases and preexisting filetypes. Here, let's define the inputs and outputs of the previous commands.
$@$ indicates an input item that must exist prior to running the command; $>$ indicates an output item that will be generated by the command.
Naturally, all the different filetypes have to be predefined and recognized by the database system.
In [65]:
j = {'ants_dwi_to_t1': [u'@ALFA Denoised Nobias SPM Dilated White matter',
u'@ALFA DWI B0 Brain FSL FAST White matter',
u'ALFA ANTS Elast DWI to T1 Transformation Template Filename'],
'warp_md_to_t1': [u'@ALFA Mean Diffusivity Image',
u'>ALFA DWI MD Map Warped to T1 space',
u'@ALFA Denoised Nobias T1 Image',
u'@ALFA ANTS Elast DWI to T1 Transformation',
u'@ALFA ANTS Elast DWI to T1 Affine Transformation']}
with open('/tmp/templates_io.json', 'w') as f:
    json.dump(j, f)
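The $@$/$>$ prefix convention can be parsed with a few lines. A sketch (assumed; the real parsing lives inside $thesaurus$):

```python
def classify_io(items):
    # Split a command's item list into inputs (@ prefix), outputs (> prefix)
    # and plain parameters (no prefix), following the convention above.
    inputs, outputs, params = [], [], []
    for item in items:
        if item.startswith('@'):
            inputs.append(item[1:])
        elif item.startswith('>'):
            outputs.append(item[1:])
        else:
            params.append(item)
    return inputs, outputs, params

io = classify_io(['@ALFA Mean Diffusivity Image',
                  '>ALFA DWI MD Map Warped to T1 space',
                  '@ALFA Denoised Nobias T1 Image'])
```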
Then call the desired command by its keyname with the desired subject, and feel light...
In [68]:
import thesaurus as t
a = t.ALFAHelper()  # jsonfile='/tmp/templates_io.json'
a.parse_command(subject='10013', name='warp_md_to_t1')
Out[68]:
Based on this file, it can also recreate the sequence of a pipeline and provide you with a subject's progression state along that pipeline.
In [77]:
step1 = a.current_stage('10013')
print('step:', step1, '\n')

step2 = a.current_stage('55855')
print('step:', step2)
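One plausible way such a stage check works (a sketch under assumptions; `current_stage`'s real logic lives in $thesaurus$): walk the pipeline in order and return the first step whose output files are missing.

```python
import os.path as osp
import tempfile

workdir = tempfile.mkdtemp()  # empty subject directory for the demo

# Hypothetical pipeline: ordered steps mapped to their expected output files.
pipeline = [('ants_dwi_to_t1', [osp.join(workdir, 'ants_elast_dwi-to-t1_Warp.nii.gz')]),
            ('warp_md_to_t1', [osp.join(workdir, '10013_MD_t1space.nii.gz')])]

def current_stage(pipeline):
    # The next step to run is the first one whose outputs are missing.
    for step, outputs in pipeline:
        if not all(osp.isfile(f) for f in outputs):
            return step
    return None  # all outputs present: pipeline complete

stage = current_stage(pipeline)  # nothing run yet, so the first step is next
```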
OK, now enough talking. Can you please run that d@£( step over my entire dataset?
In [86]:
d = '/home/grg/data/ALFA_DWI/'
from glob import glob
subjects = [osp.basename(e) for e in glob(osp.join(d, '*')) if osp.isdir(e)]
In [89]:
a.generate_batch(subjects, step='ants_dwi_to_t1', batchfile='/tmp/batch.sh', create_workflow=True)
Out[89]:
In [92]:
with open('/tmp/batch.sh') as f:
    for line in f.readlines()[:20]:
        print(line, end='')
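A minimal sketch of what such batch generation might look like (assumed; `generate_batch`'s actual logic, paths and template names are $thesaurus$'s own):

```python
import os.path as osp
import tempfile

def write_batch(subjects, template, batchfile):
    # Hypothetical: emit one shell command per subject into a batch script,
    # filling the subject id into each path slot.
    with open(batchfile, 'w') as f:
        f.write('#!/bin/sh\n')
        for s in subjects:
            f.write(template.format(subject=s) + '\n')

# Hypothetical per-subject command template (paths are illustrative only).
template = ('ANTS 3 -m CC[ /data/{subject}/T1/c2.nii.gz, '
            '/data/{subject}/DWI/b0.nii.gz, 1, 4] -o /data/{subject}/DWI/trans_')
batchfile = osp.join(tempfile.mkdtemp(), 'batch.sh')
write_batch(['10013', '55855'], template, batchfile)
```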