In [1]:
# conda install ipyrad ipcoal -c conda-forge -c bioconda
In [2]:
import ipyrad.analysis as ipa
import ipcoal
import toytree
import toyplot
In [3]:
print(ipa.__version__)
print(ipcoal.__version__)
print(toyplot.__version__)
print(toytree.__version__)
In [4]:
# generate a balanced tree
tree = toytree.rtree.baltree(ntips=10, treeheight=10e6)
# draw the tree w/ an admixture edge
tree.draw(ts='p', admixture_edges=(2, 3));
In [5]:
# create a simulation model for this tree/network: (src, dest, time-prop., admix-prop.)
model = ipcoal.Model(tree=tree, nsamples=2, Ne=4e5, admixture_edges=(2, 3, 0.5, 0.15))
# simulate N loci
model.sim_loci(nloci=3000, nsites=50)
# drop 50% as missing
model.apply_missing_mask(0.5)
# write result to a database file
model.write_snps_to_hdf5(name="test-baba-miss50", outdir="/tmp", diploid=True)
We need a SNPs database and we need to know the sample names that are in that database. As we will show, one easy way to generate tests for a dataset is to use a tree that has the sample labels on its tips (like the one we created above). This is optional, though; you can also simply write out the tests by hand.
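For example, here is a minimal sketch of writing an imap dictionary by hand using tip labels pulled from the tree object created above; the population names and which tips go in which population are arbitrary choices for illustration.
In [ ]:
# a sketch: build an imap dictionary by hand from the tree's tip labels
tips = tree.get_tip_labels()      # e.g., ['r0', 'r1', ..., 'r9']
imap_example = {
    "p1": tips[0:2],              # two samples in the P1 population
    "p2": [tips[2]],
    "p3": [tips[3]],
    "p4": tips[5:8],              # an outgroup clade
}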
In [6]:
# init a baba tool from your SNPs database
tool = ipa.baba2("/tmp/test-baba-miss50.snps.hdf5")
In [7]:
IMAP1 = {
    "p4": ["r7", "r6", "r5"],
    "p3": ["r3"],
    "p2": ["r2"],
    "p1": ["r0", "r1"],
}
IMAP2 = {
    "p4": ["r7", "r6", "r5"],
    "p3": ["r3"],
    "p2": ["r1"],
    "p1": ["r0"],
}
Here we will run the first test (IMAP1), which includes a scenario where we expect to see evidence of admixture, since our simulation included gene flow from P3 to P2 backward in time. It is good to start with simple tests before proceeding to run many tests in parallel. The output from a single test is verbose about how SNP filtering was applied and reports how many total loci, SNPs, and discordant SNPs are present.
Running an ABBA-BABA test requires data for at least one sample in each population. But when you have multiple samples per population you can also apply further filtering using a minmap argument. For example, a minmap with values set to 0.75 would require that data are present for 75% of the samples in a population for a SNP to be included in the analysis. The verbose output below will print how many SNPs are filtered by the minmap and other filters.
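As a minimal sketch of the 75% example mentioned above (the threshold value is illustrative), a minmap dictionary simply maps each population name in the imap to a proportion:
In [ ]:
# a sketch: require data for 75% of samples in each population (illustrative threshold)
minmap75 = {pop: 0.75 for pop in IMAP1}
# this dict can then be passed to run_test, e.g., tool.run_test(imap=IMAP1, minmap=minmap75)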
In [8]:
# run a single test
tool.run_test(IMAP1)
Out[8]:
In [9]:
# run the same test with different filtering applied
tool.run_test(imap=IMAP1, minmap={i: 0.5 for i in IMAP1})
Out[9]:
In [10]:
# enter multiple tests (imaps) and multiple minmaps to run in parallel
tool.run(imaps=[IMAP1, IMAP2])
In [11]:
# show the results
tool.results_table
Out[11]:
Using a tree you can conveniently describe a large set of hypotheses to test that are all compatible with the phylogenetic topology. You can also add tests to the list by hand if you want to include tests that are incompatible with the phylogeny. Multiple sample names can be selected by entering the idx node label from the tree drawing (see below).
In [12]:
tree.draw(ts='p', admixture_edges=(2, 3));
The function .generate_tests_from_tree() takes the tree as an argument in addition to a constraint_dict and a constraint_exact argument. This tool will then generate a list of all tests that are compatible with the tree given the constraints placed on it. For example, (None, None, None, 13) would only apply the constraint that the "p4" taxon must descend from node 13. Setting constraint_exact=1 on node 13 means that tests must include all descendants of node 13. By contrast, "p3" is a descendant of node 10 but has constraint_exact=0, meaning it can be node 0, 1, 2, 3, 8, 9, or 10. Still, the "p3" options will be constrained by the topology such that the sister lineage must include at least two tips. This leads to the 10 possible tests below, shown by their idx node labels in the order p1-p4.
In [13]:
# generate a list with tests from several constraints
tests1 = tool.generate_tests_from_tree(
    tree=tree,
    constraint_dict=(None, None, 13, 17),
    constraint_exact=(0, 0, 0, 1),
)
In [14]:
# generate a list with tests from several constraints
tests2 = tool.generate_tests_from_tree(
    tree=tree,
    constraint_dict=(None, None, 12, 17),
    constraint_exact=(0, 0, 0, 1),
)
In [15]:
tests = tests1 + tests2
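If you like, you can check how many tests were generated and inspect one of them; each generated test is simply an imap dictionary with p1-p4 keys (a small sketch using the objects created above):
In [ ]:
# a sketch: count the generated tests and look at the structure of the first one
print(len(tests))
print(tests[0])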
In [17]:
# # add more tests manually (tip: get tips desc from node idx using .get_tip_labels())
# tests.append({
# 'p1': tree.get_tip_labels(0),
# 'p2': ['r1', 'r3'],
# 'p3': tree.get_tip_labels(4),
# 'p4': tree.get_tip_labels(13),
# })
In [18]:
# pass a list with tests to .run() and see results in .results_table
tool.run(tests)
The results can be viewed as a pandas DataFrame (.results_table) and saved to a CSV file. The most relevant statistic is the Z score, which represents the number of standard deviations (measured by bootstrap resampling) by which D deviates from zero. Values greater than 3 are generally considered significant, but you can look up the Z value for a given alpha significance level, and you should also account for the number of tests performed when deciding on significance.
For ease of viewing, the taxa involved in each test are stored in a separate DataFrame (.taxon_table). You can concatenate these two DataFrames before saving if you wish, as you could with any pandas DataFrames.
In [42]:
# the results of tests run with .run are summarized in .results_table
tool.results_table
Out[42]:
In [45]:
# the sample names in tests that were run are also summarized in .taxon_table
tool.taxon_table
Out[45]:
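For example, here is a minimal sketch (assuming the two attributes are pandas DataFrames with matching row order, as shown above) of concatenating the taxon and results tables and saving them to a CSV file; the output path is arbitrary.
In [ ]:
# a sketch: join the taxon and results tables column-wise and save to CSV (path is arbitrary)
import pandas as pd
combined = pd.concat([tool.taxon_table, tool.results_table], axis=1)
combined.to_csv("/tmp/baba-results.csv")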
The .draw() function takes a tree as input and organizes the results stored in the baba tool object into a more easily interpretable diagram. Here the p4, p3, p2, and p1 samples in each test are indicated by different colors. The histogram on the left shows the distribution of bootstrap D-statistic values in each test and is colored to show the dominant pattern, ABBA (orange) or BABA (green), if the test is significant at p=0.01 (Z > 2.5). The data used in this plot come from the .results_table and .taxon_table shown above.
In [44]:
# generate a summary drawing of test results
canvas = tool.draw(tree=tree, sort=True, width=500, height=500)
In [18]:
# save the drawing to a PDF
import toyplot.pdf
toyplot.pdf.render(canvas, "/tmp/BABA-plot.pdf")
While ABBA-BABA tests are a powerful tool for testing hypotheses about admixture, their results are seriously prone to misinterpretation as false positives. This is because each 4-taxon test is non-independent of other tests that involve related taxa (Eaton et al. 2015).
This is clear in the example above, where the true introgressive pattern was "r3" into "r2" (forward in time). The results show admixture between these samples (e.g., tests 3, 5, 6, 7, and 13 above); however, they also pick up a signal of introgression between "r4" and "r2" (e.g., tests 9, 10, and 11). This is because "r3" shares ancestry with "r4", and so it passed alleles into "r2" that it shares with "r4". Ideally we would want to be able to distinguish whether "r4" and "r3" both introgressed into "r2" independently, or whether this is just a signal of their shared ancestry. This is the goal of "partitioned D-statistics" (Eaton et al. 2013, 2015).
To run a 5-taxon partitioned D-statistic test you must define tests using a dictionary that includes two clades sampled as potential introgressive donors in the P3 clade. These are labeled p3_1 and p3_2. The partitioned test then returns three D-statistics: D12 represents introgression of alleles shared by the two P3 sublineages, D1 represents alleles introgressed uniquely from P3_1, and D2 represents alleles introgressed uniquely from P3_2. The D12 statistic is particularly powerful since it indicates the direction of introgression. If a test does not have a significant D12, then introgression from that P3 lineage is not supported, and you should try testing in the opposite direction, as in the example below.
In [8]:
tool = ipa.baba2("/tmp/test-baba-miss50.snps.hdf5")
In [5]:
# testing r3 vs r4 as an introgressive donor into r2 or r0,r1
IMAP5 = {
    "p1": ["r0", "r1"],
    "p2": ["r2"],
    "p3_1": ["r3"],
    "p3_2": ["r4"],
    "p4": ["r5", "r6", "r7"],
}
# the reverse test of 'r2' versus 'r0,r1' as introgressive donors into r3 or r4
IMAP5_REV = {
    "p1": ["r3"],
    "p2": ["r4"],
    "p3_1": ["r0", "r1"],
    "p3_2": ["r2"],
    "p4": ["r5", "r6", "r7"],
}
The first test (IMAP5) has a significant Z12 with many more ABBBA patterns than BABBA, supporting P3 as an introgressive donor into P2. Of the two P3 lineages, we see that there is significant support for introgression of P3_1 into P2 (Z1) but not of P3_2 into P2 (Z2). Thus we know that introgression occurred from P3_1 and not P3_2 into P2.
In [6]:
# significant Z12 and Z1
tool.run_partitioned_test(IMAP5, quiet=True)
Out[6]:
The ability of partitioned D-statistics to show the direction of introgression is clearly demonstrated in the reverse test. We know that in our simulation introgression occurred from r3 into r2 (forward in time), but the test below (IMAP5_REV) asks about the reverse pattern. As we saw above, the 4-taxon tests were not able to clearly distinguish the direction of introgression. Here, however, we can see that there is not a significant Z12 statistic, meaning that this direction of introgression is not supported. The significant Z2 statistic shows that the P3_2 taxon shares many alleles with P2 (there is admixture), but the lack of a significant Z12 means it is not supported as an introgressive donor.
In [7]:
# non-significant Z12 means that the hypothesis is not supported
tool.run_partitioned_test(IMAP5_REV, quiet=True)
Out[7]:
You may have seen a paper from 2015 criticizing partitioned D-statistics and presenting a modified method named D-foil. I do not agree with that paper's criticism of the partitioned tests. The paper suffered from major problems with its simulation framework (e.g., ms simulations with gene flow in the wrong direction, migration rates >50%, etc.) that, in my opinion, made the critique of partitioned D-tests unrealistic and artificial.