Principal component analysis (PCA) is a dimensionality reduction method used to transform and project data points onto fewer orthogonal axes that explain the greatest amount of variance in the data. PCA analyses are very sensitive to missing data, and while there are many tools available to implement PCA, the ipyrad.pca tool has many options designed specifically to deal with missing data. It makes it easy to perform PCA on RAD-seq data by filtering and/or imputing missing values, and it allows easy subsampling of the individuals to include in an analysis.
In [1]:
# conda install ipyrad -c bioconda
# conda install scikit-learn -c bioconda
# conda install toyplot -c eaton-lab
In [2]:
import ipyrad.analysis as ipa
import pandas as pd
import toyplot
Your input data should be a .snps.hdf5 database file produced by ipyrad. If you do not have this you can generate one from any VCF file by following the vcf2hdf5 tool tutorial. The database file contains the genotype calls as well as the linkage information that is used for subsampling unlinked SNPs and for bootstrap resampling.
In [3]:
# the path to your .snps.hdf5 database file
data = "/home/deren/Downloads/ref_pop2.snps.hdf5"
Population assignments (an imap dictionary) are optional, but can be used in a number of ways by the pca tool. First, you can filter your data to require a minimum coverage in each population. Second, you can use the frequencies of genotypes within populations to impute missing data for other samples. Finally, population assignments can be used to color points when plotting your results. You can assign individual samples to populations using an imap dictionary like the one below. We also create a minmap dictionary stating that we require 50% coverage in each population.
In [4]:
# group individuals into populations
imap = {
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi": ["MXED8", "MXGT4", "TXGR3", "TXMD3"],
"sagr": ["CUVN10", "CUCA4", "CUSV6"],
"oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
}
# require that 50% of samples have data in each group
minmap = {i: 0.5 for i in imap}
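Note that the imap dictionary also provides an easy way to subsample which individuals are included in an analysis: simply list only the samples you want. A sketch, under the assumption that samples absent from imap are excluded, using two of the groups defined above:
In [ ]:
# a subset analysis: keep only two of the population groups from above
imap_sub = {
    "sagr": ["CUVN10", "CUCA4", "CUSV6"],
    "oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
}
# require 50% coverage in each retained group
minmap_sub = {i: 0.5 for i in imap_sub}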
The pca analysis object takes input data as the .snps.hdf5 file produced by ipyrad. All other parameters are optional. The imap dictionary groups individuals into populations, and minmap can be used to filter SNPs to include only those with data for at least some proportion of samples in every group. The mincov option works similarly: it filters SNPs that are shared by less than some proportion of all samples (in contrast to minmap, it does not use the imap groupings).
When you init the object it will load the data and apply filtering. The printed output tells you how many SNPs were removed by each filter and how much missing data remains after filtering. These remaining missing values are the ones that will be filled by imputation. The options for imputing data are listed further down in this tutorial. Here we use the "sample" method, which I generally recommend.
In [5]:
# init pca object with input data and (optional) parameter options
pca = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=0.75,
    impute_method="sample",
)
Call .run() to generate the PC axes and the variance explained by each axis. The results are stored in your analysis object as dictionaries under the attributes .pcaxes and .variances. Feel free to take these data and plot them using any method you prefer. The code cell below shows how to save the data to a CSV file and how to view the PC data as a table.
In [6]:
# run the PCA analysis
pca.run()
In [7]:
# store the PC axes as a dataframe
df = pd.DataFrame(pca.pcaxes[0], index=pca.names)
# write the PC axes to a CSV file
df.to_csv("pca_analysis.csv")
# show the first 10 samples and the first 10 PC axes
df.iloc[:10, :10].round(2)
Out[7]:
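The variance explained per axis can be examined in the same way. A minimal sketch, assuming the .variances dictionary is keyed like .pcaxes, with key 0 holding the values for this run:
In [ ]:
# view the proportion of variance explained by the first 10 PC axes
var = pd.Series(pca.variances[0])
var.iloc[:10].round(3)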
When you call .run() a PCA model is fit to the data and two results are generated: (1) the sample weightings on the component axes; and (2) the proportion of variance explained by each axis. For convenience we have developed a plotting function, .draw(), to plot these results (generated with toyplot). The first two arguments to this function are the two axes to be plotted. By default the plotting function will use the imap information to color points and create a legend.
In [8]:
# plot PC axes 0 and 2
pca.draw(0, 2);
By default run() will randomly subsample one SNP per RAD locus to reduce the effect of linkage on your results. This can be turned off by setting subsample=False, as in the plot below. When using subsampling you can set the random seed to make your results repeatable. The run above subsampled 29K SNPs from a possible 228K SNPs, and the final results without subsampling are quite similar.
In [9]:
# plot PC axes 0 and 2 with no subsampling
pca.run(subsample=False)
pca.draw(0, 2);
Subsampling unlinked SNPs is generally a good idea for PCA analyses, since you want to remove the effects of linkage from your data. It also presents a convenient way to explore the confidence in your results. By using the nreplicates option you can run many replicate analyses, each subsampling a different random set of unlinked SNPs. The replicate results are drawn with a lower opacity and the centroid of all the points for each sample is plotted as a black point. You can hover over the points with your cursor to see the sample names pop up.
In [10]:
# plot PC axes 0 and 2 with many replicate subsamples
pca.run(nreplicates=25, seed=12345)
pca.draw(0, 2);
We offer three options for imputing missing data:
- "sample": impute missing genotypes using the frequencies of genotypes within each imap population group.
- kmeans: impute like "sample", but infer the groups by kmeans clustering rather than from user-defined populations; enter an integer K as the impute_method.
- None: no imputation; missing values are filled as zeros (the ancestral allele).
The None option will almost always be a bad choice when there is any appreciable amount of missing data. Missing values will all be filled as zeros (the ancestral allele) -- this is what many other PCA tools do as well. I show it here for comparison to the imputed results, which are better. The two points near the top of the plot are the samples with the most missing data, erroneously grouped together. The rest of the samples also form much less clear clusters than in the other examples, where we use imputation or stricter filtering options.
In [11]:
# init pca object with input data and (optional) parameter options
pca1 = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=0.25,
    impute_method=None,
)
# run and draw results for impute_method=None
pca1.run(nreplicates=25, seed=123)
pca1.draw(0, 2);
Here I do not allow any missing data (mincov=1.0). You can see that this reduces the number of total SNPs from 349K to 10K. The final result is not too different from our first example, but it seems a little less smooth. In most data sets, though, it is probably better to include more data by imputing some values. Many data sets will not have as many SNPs without missing data as this one.
In [12]:
# init pca object with input data and (optional) parameter options
pca2 = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=1.0,
    impute_method=None,
)
# run and draw results for impute_method=None and mincov=1.0
pca2.run(nreplicates=25, seed=123)
pca2.draw(0, 2);
The kmeans clustering method allows imputing values based on population allele frequencies (like the sample method), but without having to assign individuals to populations a priori. In other words, it is meant to reduce the bias introduced by assigning individuals yourself. Instead, this method uses kmeans clustering to group individuals into "populations" and then imputes values based on those population assignments. This is accomplished through iterative clustering, starting with only the SNPs that are present across 90% of all samples (this can be changed with the topcov param; see the sketch after this example) and then allowing more missing data in each iteration until the mincov parameter value is reached.
This method works especially well if you have a lot of missing data and fear that user-defined population assignments will bias your results. Here it gives very similar results to our first plots using the "sample" impute method, suggesting that our population assignments are not greatly biasing the results. To use K=7 clusters you simply enter impute_method=7.
In [13]:
# kmeans imputation
pca3 = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=0.5,
    impute_method=7,
)
# run and draw results for kmeans clustering into 7 groups
pca3.run(nreplicates=25, seed=123)
pca3.draw(0, 2);
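If you want the iterative clustering to start from a different coverage threshold than 90%, the topcov param mentioned above can be set when initializing the object. A sketch (the parameter name comes from the text above; the value here is illustrative):
In [ ]:
# kmeans imputation, starting from SNPs present in 80% of samples
pca4 = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=0.5,
    impute_method=7,
    topcov=0.8,
)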
In [14]:
import toyplot.pdf
# save returned plot objects as variables
canvas, axes, mark = pca3.draw(0, 2)
# pass the canvas object to toyplot render function
toyplot.pdf.render(canvas, "PCA-kmeans-7.pdf")
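The same canvas object can be rendered to other formats supported by toyplot. For example, rendering to HTML preserves the interactive hover pop-ups described earlier:
In [ ]:
import toyplot.html
# save an interactive HTML version of the same figure
toyplot.html.render(canvas, "PCA-kmeans-7.html")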
You can view the proportion of missing data per sample by accessing the .missing data table of your pca analysis object. You can see that most samples in this data set had 10% missing data or less, but a few had 20-50% missing data. You can hover your cursor over the plot above to see the sample names. It seems pretty clear that the samples with large amounts of missing data do not stand out as outliers in these plots like they did in the no-imputation plot, which is great!
In [15]:
# .missing is a pandas DataFrame
pca3.missing.sort_values(by="missing")
Out[15]:
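Because .missing is a regular pandas DataFrame you can also filter it directly, for example to list only the high-missingness samples discussed above (assuming the column is named "missing", as implied by the sort_values call):
In [ ]:
# show only samples with more than 20% missing data
pca3.missing[pca3.missing["missing"] > 0.20]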
While PCA plots are very informative, it is sometimes difficult to visualize just how well separated all of your samples are, since the results span many dimensions. A popular tool to further examine the separation of samples is t-distributed stochastic neighbor embedding (t-SNE). We've implemented this in the pca tool as well: it first decomposes the data with PCA, and then runs t-SNE on the PC axes. The results will vary depending on the parameters and the random seed, so you cannot plot replicate runs with this method, and it is important to explore parameter values to find settings that work well.
In [16]:
pca.run_tsne(subsample=True, perplexity=4.0, n_iter=100000, seed=123)
In [17]:
pca.draw();
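Because the embedding is sensitive to its parameters, a simple way to explore them is to re-run with several perplexity values and compare the resulting plots; a sketch reusing the call above:
In [ ]:
# compare t-SNE embeddings across a few perplexity values
for perp in (2.0, 4.0, 8.0):
    pca.run_tsne(subsample=True, perplexity=perp, n_iter=100000, seed=123)
    pca.draw();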