This notebook serves as a template for using Python to generate and run a list of commands. To use it, follow these instructions:

1) Select File -> Make a Copy... from the toolbar above to copy this notebook, and provide a new name describing the method(s) that you are testing.

2) Modify the file paths in cell 2 of Environment preparation to match the directory structure on your system.

3) Select the datasets you wish to test under Preparing data set sweep; choose from the list of datasets included in tax-credit, or add your own.

4) Prepare methods and command template. Enter your method/parameter combinations as a dictionary to method_parameters_combinations in cell 1, then provide a command_template in cell 2. This notebook example assumes that the method commands are passed to the command line, but the command list generated by parameter_sweep() can also be directed to the Python interpreter; a sketch of that alternative follows this list. Check the command list in cell 3, and set the number of jobs and joblib parameters in cell 4.

5) Run all cells and hold onto your hat.

For an example of how to test classification methods in this notebook, see taxonomy assignment with QIIME 1.
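As a hedged illustration of the interpreter-based alternative mentioned in step 4 (the function name below is a placeholder, not part of the tax-credit API): write command_template as Python source rather than a shell command, then execute each generated string with exec() instead of os.system().

In [ ]:
# Hypothetical sketch: run the generated commands in the Python interpreter.
# Assumes command_template was written as Python source, e.g.
# "assign_taxonomy('{1}', '{0}', '{2}', '{3}', method='{4}')",
# where assign_taxonomy is a placeholder for your own callable.
def run_in_python(command):
    exec(command)  # execute one generated statement
    return 0       # mimic os.system's zero exit status on success

# Parallel(n_jobs=4)(delayed(run_in_python)(cmd) for cmd in commands)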
In [61]:
from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.framework_functions import (parameter_sweep,
                                            generate_per_method_biom_tables,
                                            move_results_to_repository)
In [62]:
project_dir = expandvars("$HOME/Desktop/projects/tax-credit")
analysis_name = "mock-community"
data_dir = join(project_dir, "data", analysis_name)
reference_database_dir = expandvars("$HOME/Desktop/projects/tax-credit/data/ref_dbs/")
results_dir = expandvars("$HOME/Desktop/projects/mock-community/")
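The results directory may not exist yet on a fresh system; here is a minimal, optional sketch to create it before the sweep writes anything (an assumption about your setup, not a required step):

In [ ]:
from os import makedirs

# Create the results directory if it is missing (no-op if it already exists).
makedirs(results_dir, exist_ok=True)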
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep.
In [63]:
dataset_reference_combinations = [
    ('mock-1', 'gg_13_8_otus'),  # formerly S16S-1
    ('mock-2', 'gg_13_8_otus'),  # formerly S16S-2
    ('mock-3', 'gg_13_8_otus'),  # formerly Broad-1
    ('mock-4', 'gg_13_8_otus'),  # formerly Broad-2
    ('mock-5', 'gg_13_8_otus'),  # formerly Broad-3
    ('mock-6', 'gg_13_8_otus'),  # formerly Turnbaugh-1
    ('mock-7', 'gg_13_8_otus'),  # formerly Turnbaugh-2
    ('mock-8', 'gg_13_8_otus'),  # formerly Turnbaugh-3
    ('mock-9', 'unite_20.11.2016'),  # formerly ITS1
    ('mock-10', 'unite_20.11.2016'),  # formerly ITS2-SAG
    ('mock-12', 'gg_13_8_otus'),  # Extreme
    ('mock-13', 'gg_13_8_otus_full16S'),  # kozich-1
    ('mock-14', 'gg_13_8_otus_full16S'),  # kozich-2
    ('mock-15', 'gg_13_8_otus_full16S'),  # kozich-3
    ('mock-16', 'gg_13_8_otus'),  # schirmer-1
]
# Each entry maps a database name to (reference sequences, reference taxonomy).
reference_dbs = {
    'gg_13_8_otus': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim250.fasta'),
                     join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
    'gg_13_8_otus_full16S': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.fasta'),
                             join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
    'unite_20.11.2016': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim250.fasta'),
                         join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv'))}
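Because a typo in any of these paths will only surface partway through the sweep, it can save time to confirm the reference files exist up front. A small optional check (not part of the template):

In [ ]:
from os.path import exists

# Warn about any missing reference files before launching the sweep.
for name, (seqs_fp, taxa_fp) in reference_dbs.items():
    for fp in (seqs_fp, taxa_fp):
        if not exists(fp):
            print('Missing reference file for {0}: {1}'.format(name, fp))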
In [69]:
method_parameters_combinations = {
    'awesome-method-number-1': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
                                               0.6, 0.7, 0.8, 0.9, 1.0]},
}
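Each method name maps to a dictionary of parameter names and the values to sweep, and parameter_sweep() generates one command per parameter combination. As a hedged illustration (the method and parameter names below are hypothetical), two parameters with three and two values should expand to six commands per dataset/reference pair:

In [ ]:
# Hypothetical example: the sweep covers every combination of values,
# so 3 confidence values x 2 min-hits values = 6 commands per dataset.
example_combinations = {
    'hypothetical-method': {'confidence': [0.5, 0.7, 0.9],
                            'min-hits': [1, 3]},
}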
Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().

Fields must adhere to the following format:

{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
In [70]:
command_template = "command_line_assignment -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
                           dataset_reference_combinations,
                           method_parameters_combinations, command_template,
                           infile='rep_seqs.fna',
                           output_name='rep_seqs_tax_assignments.txt')
As a sanity check, we can look at the number of commands generated and inspect the first one.
In [75]:
print(len(commands))
commands[0]
Out[75]:
Finally, we run our commands.
In [76]:
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
Out[76]:
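os.system() returns each command's exit status, and Parallel collects those values into a list, so capturing the return value (rather than discarding it as above) makes failed runs easy to spot. A minimal sketch:

In [ ]:
# Capture exit statuses so failed commands are easy to identify.
exit_statuses = Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
failed = [c for c, status in zip(commands, exit_statuses) if status != 0]
print('{0} of {1} commands failed'.format(len(failed), len(commands)))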
In [77]:
# The four wildcards match the nested dataset/reference database/method/parameters
# output directories created by the sweep.
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, data_dir)
Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and method_dirs glob below should not need to be changed unless substantial changes were made to the filepaths in the preceding cells.

Uncomment and run when (and if) you want to move your new results to the tax-credit directory. Note that results needn't be in tax-credit to compare them using the evaluation notebooks.
In [78]:
# precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
# method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
# move_results_to_repository(method_dirs, precomputed_results_dir)
In [ ]: