ipyrad-analysis toolkit: abba-baba

The baba tool can be used to measure abba-baba statistics across many different hypotheses on a tree, to easily group individuals into populations for measuring abba-baba using allele frequencies, and to summarize or plot the results of many analyses.

Load packages


In [1]:
# conda install ipyrad ipcoal -c conda-forge -c bioconda

In [2]:
import ipyrad.analysis as ipa
import ipcoal
import toytree
import toyplot

In [3]:
print(ipa.__version__)
print(ipcoal.__version__)
print(toyplot.__version__)
print(toytree.__version__)


0.9.54
0.1.4
0.20.0-dev
2.0.1

Simulate linked SNP data from this tree/network

Here we simulate data on a tree/network that includes an introgressive edge. To approximate RAD-seq data we will also make ~50% of the data missing, mimicking dropout from mutation disruption or insufficient sequencing coverage.


In [4]:
# generate a balanced tree
tree = toytree.rtree.baltree(ntips=10, treeheight=10e6)

# draw the tree w/ an admixture edge
tree.draw(ts='p', admixture_edges=(2, 3));


[tree drawing: balanced 10-tip tree (tips r0-r9, node idx labels 0-18) with an admixture edge drawn from node 2 to node 3; time axis 0 to 10,000,000]

In [5]:
# create a simulation model for this tree/network: (src, dest, time-prop., admix-prop.)
model = ipcoal.Model(tree=tree, nsamples=2, Ne=4e5, admixture_edges=(2, 3, 0.5, 0.15))

# simulate N loci
model.sim_loci(nloci=3000, nsites=50)

# drop 50% as missing
model.apply_missing_mask(0.5)

# write result to a database file
model.write_snps_to_hdf5(name="test-baba-miss50", outdir="/tmp", diploid=True)


wrote 64658 SNPs to /tmp/test-baba-miss50.snps.hdf5

Input data for BABA tests

We need a SNPs database and we need to know the sample names in that database. As we will show, one easy way to generate tests for a dataset is to use a tree that has the sample labels on its tips (like the one we created above). This is optional, though; you can also simply write out the tests by hand.


In [6]:
# init a baba tool from your SNPs database
tool = ipa.baba2("/tmp/test-baba-miss50.snps.hdf5")

The format of test hypotheses

The simplest way to describe a test is with a dictionary with keys labeled "p1", "p2", "p3", and "p4", and values listing the tip names that will be used to represent each population/species. When multiple samples are listed, their pooled allele frequency is used.


In [7]:
IMAP1 = {
    "p4": ["r7", "r6", "r5"],
    "p3": ["r3"],
    "p2": ["r2"],
    "p1": ["r0", "r1"],
}

IMAP2 = {
    "p4": ["r7", "r6", "r5"],
    "p3": ["r3"],
    "p2": ["r1"],
    "p1": ["r0"],
}

A simple test

Here we will run the first test (IMAP1), which describes a scenario where we expect to see evidence of admixture, since our simulation included gene flow from P3 to P2 (backward in time). It is good to start with simple tests before proceeding to run many tests in parallel. The output from a single test is verbose about how SNP filtering is applied and how many total loci, SNPs, and discordant SNPs are present.

Running an ABBA-BABA test requires data for at least one sample in each population, but when you have multiple samples per population you can apply further filtering using the minmap argument. For example, a minmap with values set to 0.75 would require that data be present for 75% of the samples in a population for a SNP to be included in the analysis. The verbose output below prints how many SNPs are filtered by the minmap and other filters.
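
As a minimal sketch, that 75% threshold can be written as a dictionary keyed by the same population names used in the imap; the cells that follow then run the test without a minmap and with a 0.5 minmap:

In [ ]:
# a minimal sketch: require data for 75% of the samples in every population
minmap75 = {pop: 0.75 for pop in IMAP1}
tool.run_test(imap=IMAP1, minmap=minmap75)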


In [8]:
# run a single test 
tool.run_test(IMAP1)


Samples: 7
Sites before filtering: 64658
Filtered (indels): 0
Filtered (bi-allel): 8219
Filtered (mincov): 3632
Filtered (minmap): 29832
Filtered (subsample invariant): 15136
Filtered (combined): 41811
Sites after filtering: 22847
Sites containing missing values: 16167 (70.76%)
Missing values in SNP matrix: 23695 (14.82%)
Out[8]:
D bootstd Z ABBA BABA nSNPs nloci
0 0.551 0.029 18.908 602.667 174.333 22847 1530

In [9]:
# run the same test with different filtering applied
tool.run_test(imap=IMAP1, minmap={i: 0.5 for i in IMAP1})


Samples: 7
Sites before filtering: 64658
Filtered (indels): 0
Filtered (bi-allel): 8219
Filtered (mincov): 3632
Filtered (minmap): 34288
Filtered (subsample invariant): 15136
Filtered (combined): 44692
Sites after filtering: 19966
Sites containing missing values: 13286 (66.54%)
Missing values in SNP matrix: 16829 (12.04%)
Out[9]:
D bootstd Z ABBA BABA nSNPs nloci
0 0.57 0.036 15.792 510.417 139.708 19966 1321

Running multiple tests (simple)

You may wish to run many tests and collect the results in a table. This can be automated and run in parallel using the .run() function. The tests are submitted as a list of dictionaries.


In [10]:
# enter multiple tests (imaps) and multiple minmaps to run in parallel
tool.run(imaps=[IMAP1, IMAP2])


[####################] 100% 0:02:08 | abba-baba tests 

In [11]:
# show the results
tool.results_table


Out[11]:
D bootstd Z ABBA BABA nSNPs nloci
0 0.551 0.032 17.312 602.667 174.333 22847 1530
1 -0.081 0.069 1.164 78.500 92.250 17315 1266

Running multiple tests (automated)

Using a tree, you can conveniently describe a large set of test hypotheses that are all compatible with the phylogenetic topology. You can also add tests to the list by hand if you want to include tests that are incompatible with the phylogeny. Multiple sample names can be selected by entering an idx node label from the tree drawing (see below).


In [12]:
tree.draw(ts='p', admixture_edges=(2, 3));


[tree drawing repeated from above: tips r0-r9, node idx labels 0-18, admixture edge from node 2 to node 3]

The function .generate_tests_from_tree() takes the tree as an argument in addition to a constraint_dict and a constraint_exact argument, each ordered as (p1, p2, p3, p4). The tool then generates a list of all tests that are compatible with the tree given the constraints placed on it. For example, the constraint_dict (None, None, 13, 17) used below constrains the "p3" samples to descend from node 13 and the "p4" samples to descend from node 17, while leaving "p1" and "p2" unconstrained. The constraint_exact=1 on the "p4" entry means that tests must include all descendants of node 17; by contrast, constraint_exact=0 on the "p3" entry means that any clade nested within node 13, down to a single tip, can be used. The "p3" options are still constrained by the topology, since the sister lineage of "p3" must include at least two tips from which to sample "p1" and "p2". This yields the 12 and 6 tests generated in the two cells below; the sampled tip names for each test are summarized later in the .taxon_table.


In [13]:
# generate a list with tests from several constraints
tests1 = tool.generate_tests_from_tree(
    tree=tree, 
    constraint_dict=(None, None, 13, 17), 
    constraint_exact=(0, 0, 0, 1),
)


12 tests generated from tree

In [14]:
# generate a list with tests from several constraints
tests2 = tool.generate_tests_from_tree(
    tree=tree, 
    constraint_dict=(None, None, 12, 17), 
    constraint_exact=(0, 0, 0, 1),
)


6 tests generated from tree

In [15]:
tests = tests1 + tests2

In [17]:
# # add more tests manually (tip: get the tip labels descended from a node idx using .get_tip_labels())
# tests.append({
#     'p1': tree.get_tip_labels(0),
#     'p2': ['r1', 'r3'], 
#     'p3': tree.get_tip_labels(4), 
#     'p4': tree.get_tip_labels(13), 
# })

In [18]:
# pass a list with tests to .run() and see results in .results_table
tool.run(tests)


[####################] 100% 0:14:16 | abba-baba tests 

Interpreting results

The results can be viewed as a pandas DataFrame (.results_table) and saved to a CSV file. The most relevant statistic is the Z-score, which measures the number of standard deviations (estimated by bootstrap resampling) by which D deviates from zero. Values greater than 3 are generally considered significant, but you can look up the Z cutoff for a given alpha significance level, and you should also account for the number of tests performed when deciding on significance.
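
As a sketch of that lookup (assuming scipy is installed; it is not otherwise used in this notebook), the two-tailed Z cutoff for a chosen alpha can be computed from the normal quantile function, with a simple Bonferroni correction for the number of tests:

In [ ]:
# a sketch, assuming scipy is available: two-tailed Z cutoffs for alpha=0.01
from scipy.stats import norm
alpha = 0.01
ntests = tool.results_table.shape[0]
print(norm.ppf(1 - alpha / 2))             # single-test cutoff, ~2.58
print(norm.ppf(1 - alpha / (2 * ntests)))  # Bonferroni-corrected cutoff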

For ease of viewing, the taxa involved in each test are stored in a separate DataFrame (.taxon_table). You can concatenate these two DataFrames together before saving, as you can with any pandas DataFrames.
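
For example, a minimal sketch of joining and saving the two tables with standard pandas calls (the output path is arbitrary):

In [ ]:
# a sketch using standard pandas: join taxa and results side by side, save CSV
import pandas as pd
combined = pd.concat([tool.taxon_table, tool.results_table], axis=1)
combined.to_csv("/tmp/baba-tests.csv")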


In [42]:
# the results of tests run with .run are summarized in .results_table
tool.results_table


Out[42]:
D bootstd Z ABBA BABA nSNPs nloci
0 0.532 0.035 15.259 518.629 158.242 27262 1557
1 0.474 0.035 13.438 454.096 162.217 28247 1573
2 0.447 0.034 13.163 540.958 206.517 35954 1954
3 0.457 0.038 12.076 441.671 164.475 28102 1569
4 0.015 0.078 0.195 68.433 66.392 20769 1254
5 0.044 0.068 0.644 88.367 80.958 27605 1580
6 0.363 0.046 7.874 288.458 134.667 21352 1253
7 0.556 0.040 13.950 440.758 125.608 21492 1260
8 0.536 0.039 13.600 412.833 124.633 21259 1250
9 0.340 0.041 8.299 346.242 170.475 27281 1554
10 0.043 0.079 0.539 71.050 65.250 20626 1252
11 0.350 0.044 7.969 282.475 136.083 21208 1242
12 -0.367 0.048 7.666 131.783 284.425 21085 1239
13 0.077 0.056 1.361 149.600 128.300 21171 1237
14 0.031 0.049 0.638 171.700 161.333 27041 1539
15 -0.065 0.076 0.855 77.083 87.775 20187 1265
16 -0.153 0.045 3.427 171.844 233.692 29876 1621
17 0.008 0.057 0.138 137.742 135.600 21018 1228

In [45]:
# the sample names in tests that were run are also summarized in .taxon_table
tool.taxon_table


Out[45]:
p1 p2 p3 p4
0 r0,r1 r2 r3 r5,r6,r7,r8,r9
1 r0 r2 r3,r4 r5,r6,r7,r8,r9
2 r0,r1 r2 r3,r4 r5,r6,r7,r8,r9
3 r1 r2 r3,r4 r5,r6,r7,r8,r9
4 r0 r1 r3 r5,r6,r7,r8,r9
5 r0 r1 r3,r4 r5,r6,r7,r8,r9
6 r1 r2 r4 r5,r6,r7,r8,r9
7 r0 r2 r3 r5,r6,r7,r8,r9
8 r1 r2 r3 r5,r6,r7,r8,r9
9 r0,r1 r2 r4 r5,r6,r7,r8,r9
10 r0 r1 r4 r5,r6,r7,r8,r9
11 r0 r2 r4 r5,r6,r7,r8,r9
12 r3 r4 r2 r5,r6,r7,r8,r9
13 r3 r4 r0 r5,r6,r7,r8,r9
14 r3 r4 r0,r1 r5,r6,r7,r8,r9
15 r0 r1 r2 r5,r6,r7,r8,r9
16 r3 r4 r0,r1,r2 r5,r6,r7,r8,r9
17 r3 r4 r1 r5,r6,r7,r8,r9

Plotting results

The .draw() function takes a tree as input and organizes the results stored in the baba tool object into a more easily interpretable diagram. Here the p4, p3, p2, and p1 samples in each test are indicated by different colors. The histogram on the left shows the distribution of bootstrap D-statistic values in each test and is colored by the dominant pattern, ABBA (orange) or BABA (green), when the test is significant at p=0.01 (Z > 2.5). The data used in this plot are from the .results_table and .taxon_table shown above.


In [44]:
# generate a summary drawing of test results
canvas = tool.draw(tree=tree, sort=True, width=500, height=500)


[summary drawing: per-test rows 0-17 with Z-score bars (axis -15 to 0) and bootstrap D-statistic histograms (axis -0.5 to 0.7), drawn alongside the tree with tips r0-r9]

In [18]:
# save the drawing to a PDF
import toyplot.pdf
toyplot.pdf.render(canvas, "/tmp/BABA-plot.pdf")

Non-independence of D-statistics

While ABBA-BABA tests are a powerful tool for testing hypotheses about admixture, they are seriously prone to yielding false positives if results are interpreted carelessly. This is because each 4-taxon test is not independent of other tests that involve related taxa (Eaton et al. 2015).

This is clear in the example above, where the true introgressive pattern was "r3" into "r2" (forward in time). The results show admixture between these samples (e.g., tests 3, 5, 6, 7, and 13 above); however, they also pick up a signal of introgression between "r4" and "r2" (e.g., tests 9, 10, and 11). This is because "r3" shares ancestry with "r4", and so it passed alleles into "r2" that it shares with "r4". Ideally we would want to distinguish whether "r4" and "r3" each introgressed into "r2" independently, or whether the second signal simply reflects their shared ancestry. This is the goal of the "partitioned D-statistics" (Eaton et al. 2013, 2015).

Running 5-taxon (partitioned) D-statistics

To run a 5-taxon partitioned D-statistic test you must define tests using a dictionary that includes two clades sampled as potential introgressive donors within the P3 clade, labeled "p3_1" and "p3_2". The partitioned test then returns three D-statistics: D12 measures introgression of alleles shared by the two P3 sublineages; D1 measures alleles that introgressed uniquely from P3_1; and D2 measures alleles that introgressed uniquely from P3_2. The D12 statistic is particularly powerful since it indicates the direction of introgression: if a test does not have a significant D12, then introgression from that P3 lineage is not supported, and you should try testing in the opposite direction, as in the example below.


In [8]:
tool = ipa.baba2("/tmp/test-baba-miss50.snps.hdf5")

In [5]:
# testing r3 vs r4 as an introgressive donor into r2 or r0,r1
IMAP5 = {
    "p1": ["r0", "r1"],
    "p2": ["r2"],
    "p3_1": ["r3"],
    "p3_2": ["r4"],
    "p4": ['r5', 'r6', 'r7'],
}

# the reverse test of 'r2' versus 'r0,r1' as introgressive donors into r3 or r4
IMAP5_REV = {
    "p1": ["r3"],
    "p2": ["r4"],
    "p3_1": ["r0", "r1"],
    "p3_2": ["r2"],
    "p4": ['r5', 'r6', 'r7'],
}

Partitioned D-statistic results

The first test (IMAP5) has a significant Z12, with many more ABBBA patterns than BABBA, supporting P3 as an introgressive donor into P2. Of the two P3 lineages, there is significant support for introgression of P3_1 into P2 (Z1) but not of P3_2 into P2 (Z2). Thus we infer that introgression occurred into P2 from P3_1 and not from P3_2.


In [6]:
# significant Z12 and Z1 
tool.run_partitioned_test(IMAP5, quiet=True)


Out[6]:
D12 D1 D2 boot12std boot1std boot2std Z12 Z1 Z2 ABBBA BABBA ABBAA BABAA ABABA BAABA nSNPs nloci
0 0.42 0.598 -0.088 0.053 0.053 0.098 7.863 11.217 0.893 235.792 96.417 167.833 42.167 42.875 51.125 18585 1134

The ability of partitioned D-statistics to show the direction of introgression is clearly demonstrated by the reverse test. We know that in our simulation introgression occurred from r3 into r2 (forward in time), but the test below (IMAP5_REV) asks about the reverse pattern. As we saw above, the 4-taxon tests were not able to clearly distinguish the direction of introgression. Here, however, the Z12 statistic is not significant, meaning that this direction of introgression is not supported. The significant Z2 statistic shows that the P3_2 taxon shares many alleles with P2 (there is admixture), but the lack of a significant Z12 means it is not supported as an introgressive donor.


In [7]:
# non-significant Z12 means that the hypothesis is not supported
tool.run_partitioned_test(IMAP5_REV, quiet=True)


Out[7]:
D12 D1 D2 boot12std boot1std boot2std Z12 Z1 Z2 ABBBA BABBA ABBAA BABAA ABABA BAABA nSNPs nloci
0 -0.006 0.096 -0.593 0.071 0.097 0.055 0.078 0.988 10.806 107.292 108.5 51.125 42.167 42.875 167.833 18585 1134

Partitioned D-tests versus Dfoil...

You may have seen a paper from 2015 criticizing the partitioned D-statistics and presenting a modification to the method named D-foil. I do not agree with that paper's criticism of the partitioned tests. In my opinion, the paper suffered from major problems with its simulation framework (e.g., ms simulations with gene flow in the wrong direction, migration rates >50%, etc.) that made its critique of the partitioned D-tests unrealistic and artificial.