abba-baba admixture tests

The ipyrad.analysis Python module includes functions to calculate abba-baba admixture statistics (including several variants of these measures), to perform significance tests, and to produce plots of the results. All code in this notebook is written in Python, which you can copy/paste into an IPython terminal to execute, or, preferably, run in a Jupyter notebook like this one. See the other analysis cookbooks for instructions on using Jupyter notebooks. All of the software required for this tutorial is included with ipyrad
(v.6.12+). We have also written plotting functions to help summarize and interpret the results.
In [1]:
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
The code can be easily parallelized across cores on your machine, or across many nodes of an HPC cluster, using the ipyparallel
library (see our ipyparallel tutorial). An ipcluster
instance must be running for you to connect to, which you can start by running 'ipcluster start'
in a terminal.
In [5]:
ipyclient = ipp.Client()
len(ipyclient)
Out[5]:
In [26]:
locifile = "./analysis-ipyrad/pedic_outfiles/pedic.loci"
newick = "./analysis-raxml/RAxML_bestTree.pedic"
In [28]:
## parse the newick tree, re-root it, and plot it.
tre = toytree.tree(newick=newick)
tre.root(wildcard="prz")
tre.draw(node_labels=True, node_size=10);
## store rooted tree back into a newick string.
newick = tre.tree.write()
To give a gist of what this code can do, here is a quick tutorial version, each step of which we explain in greater detail below. We first create a 'baba'
analysis object that is linked to our data file; in this example we name the variable bb. Then we tell it which tests to perform, here by automatically generating a number of tests using the generate_tests_from_tree()
function. Finally, we calculate the results and plot them.
In [29]:
## create a baba object linked to a data file and newick tree
bb = ipa.baba(data=locifile, newick=newick)
## generate all possible abba-baba tests meeting a set of constraints
bb.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno"],
})
## run all tests linked to bb
bb.run(ipyclient)
In [31]:
## save the results table to a csv file
bb.results_table.to_csv("bb.abba-baba.csv")
## show the results table in notebook
bb.results_table
Out[31]:
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent, because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in Eaton et al. (2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics
(described further below) to tease apart whether a single or multiple introgression events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) with several other samples, but the signal is strongest with respect to 30556_thamno (test 33). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 14, 20, 29, 32).
In [32]:
## plot the results, showing here some plotting options.
bb.plot(height=900,
width=600,
pct_tree_y=0.1,
ewidth=2,
alpha=4.,
style_test_labels={"font-size":"10px"},
);
The baba object

The fundamental object for running abba-baba tests is the ipa.baba()
object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on, then you only need to enter the path to your data once. The data file must be a '.loci'
file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus
=4), to maximize the amount of data available for any test. Once an initial baba
object is created, you can create copies of that object that will inherit its parameter settings, and which you can use to perform different tests, like below.
In [11]:
## create an initial object linked to your data in 'locifile'
aa = ipa.baba(data=locifile)
## create two other copies
bb = aa.copy()
cc = aa.copy()
## print these objects
print(aa)
print(bb)
print(cc)
The next thing we need to do is to link a 'test'
to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba
object named 'cc'
below we enter two tests using a list to show how multiple tests can be linked to a single baba
object.
In [12]:
aa.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["29154_superba"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
bb.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["30686_cyathophylla"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
cc.tests = [
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41954_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41478_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
]
Each baba
object has a set of parameters associated with it that are used to filter the loci that will be used in the test and to set some other optional settings. If the 'mincov'
parameter is set to 1 (the default) then loci in the data set will only be used in a test if there is at least one sample from every tip of the tree that has data for that locus. For example, in the tests above where we entered two samples to represent "p4" only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at the locus in order for it to be included in the analysis then you could set mincov=2
. However, for the test above, setting mincov=2
would filter out all of the data, since a coverage of 2 is impossible for 'p3', 'p2', and 'p1', which each have only one sample. Therefore, you can also enter the mincov
parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba
object 'bb'
.
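The mincov filtering rule described above can be sketched as a simple predicate. This is an illustrative re-implementation, not ipyrad's actual code; the counts dictionary below is a hypothetical per-locus tally of how many samples in each tip taxon have data.

```python
def locus_passes(counts, mincov):
    """Return True if a locus has enough samples with data for every tip taxon.

    counts: dict mapping tip taxon -> number of its samples with data at this locus
    mincov: an int applied to every taxon, or a dict of per-taxon minimums
    """
    if isinstance(mincov, int):
        mincov = {taxon: mincov for taxon in counts}
    return all(counts[taxon] >= mincov[taxon] for taxon in counts)

# a locus where only one of the two p4 samples has data
counts = {"p4": 1, "p3": 1, "p2": 1, "p1": 1}
print(locus_passes(counts, 1))                                      # True
print(locus_passes(counts, {"p4": 2, "p3": 1, "p2": 1, "p1": 1}))   # False
```

This shows why a per-taxon dictionary is needed: a flat mincov=2 could never be satisfied by taxa represented by a single sample.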
In [13]:
## print params for object aa
aa.params
Out[13]:
In [14]:
## set the mincov value as a dictionary for object bb
bb.params.mincov = {"p4":2, "p3":1, "p2":1, "p1":1}
bb.params
Out[14]:
When you execute the 'run()'
command all of the tests for the object will be distributed to run in parallel on your cluster (or the cores available on your machine) as connected to your ipyclient
object. The results of the tests will be stored in your baba
object under the attributes 'results_table'
and 'results_boots'
.
In [15]:
## run tests for each of our objects
aa.run(ipyclient)
bb.run(ipyclient)
cc.run(ipyclient)
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table
, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their 'index'
(the number in the left-most column). For example, below we see the results for object 'cc'
tests 0 and 1. You can see which taxa were used in each test by accessing them as 'cc.tests[0]'
or 'cc.tests[1]'
. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
In [17]:
## you can sort the results by Z-score
cc.results_table.sort_values(by="Z", ascending=False)
## save the table to a file
cc.results_table.to_csv("cc.abba-baba.csv")
## show the results in notebook
cc.results_table
Out[17]:
Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input rooted tree and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests, otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 1608 to 13.
In [18]:
## create a new 'copy' of your baba object and attach a treefile
dd = bb.copy()
dd.newick = newick
## generate all possible tests
dd.generate_tests_from_tree()
## a dict of constraints
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["40578_rex", "35855_rex"],
}
## generate tests with constraints
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=False,
)
## 'exact' constraints are even more restrictive
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=True,
)
In [19]:
## run the dd tests
dd.run(ipyclient)
dd.plot(height=500, pct_tree_y=0.2, alpha=4., tree_style='c');
dd.results_table
Out[19]:
The default (required) input data file is the .loci
file produced by ipyrad
. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
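As a rough sketch of the format: a .loci file stores each locus as a block of name/sequence lines closed by a "//" delimiter line, so the loci can be counted by counting those delimiters. The snippet below uses a made-up two-locus string for illustration, not the tutorial's actual file.

```python
# a tiny made-up example of .loci-style text: two loci, each ending in a "//" line
loci_text = """\
29154_superba     TGCATGCATA
30686_cyatho      TGCATGCATG
//                    *     |0|
29154_superba     GGACTAGCTT
40578_rex         GGACTAGCTT
//                          |1|
"""

# each "//" line closes one locus block
nloci = sum(1 for line in loci_text.splitlines() if line.startswith("//"))
print(nloci)  # 2
```

In practice you would read the real file with open(locifile).read() and count the same way.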
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do at least need a hypothesis for how your samples are related in order to set up meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
In [20]:
## path to a locifile created by ipyrad
locifile = "./analysis-ipyrad/pedicularis_outfiles/pedicularis.loci"
## path to an unrooted tree inferred with tetrad
newick = "./analysis-tetrad/tutorial.tree"
You can see in the results_table
below that the D-statistic ranges between 0.2 and 0.4 in these tests. These values alone are not terribly informative, so we instead generally focus on the Z-score, which represents how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates per test is 1000; each replicate resamples the loci with replacement.
In these tests ABBA and BABA occurred with roughly equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the tests we set up above).
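The bootstrap Z-score described above can be sketched as follows. This is a simplified illustration with made-up per-locus ABBA/BABA frequencies, not ipyrad's implementation: D = (sum ABBA - sum BABA) / (sum ABBA + sum BABA), and Z is |D| divided by the standard deviation of D across bootstrap resamples of loci.

```python
import random

def dstat(abba, baba):
    """D = (sum ABBA - sum BABA) / (sum ABBA + sum BABA)."""
    return (sum(abba) - sum(baba)) / (sum(abba) + sum(baba))

random.seed(42)
# made-up per-locus ABBA/BABA site frequencies for 200 loci,
# with ABBA slightly inflated to mimic an admixture signal
abba = [random.random() for _ in range(200)]
baba = [random.random() * 0.8 for _ in range(200)]

d_obs = dstat(abba, baba)

# bootstrap: resample loci with replacement and recompute D each time
boots = []
for _ in range(1000):
    idx = [random.randrange(200) for _ in range(200)]
    boots.append(dstat([abba[i] for i in idx], [baba[i] for i in idx]))

mean = sum(boots) / len(boots)
std = (sum((b - mean) ** 2 for b in boots) / len(boots)) ** 0.5
z = abs(d_obs) / std
print(round(d_obs, 3), round(z, 1))
```

A large Z indicates that the bootstrap distribution of D sits far from zero relative to its spread, which is the evidence of admixture the tests report.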
In [17]:
print(cc.results_table)