http://www.analyticalultracentrifugation.com/dynamic_density_gradients.htm
Meselson et al. - 1957 - Equilibrium Sedimentation of Macromolecules in Density Gradients
Vinograd et al. - 1963 - Band-Centrifugation of Macromolecules and Viruses
http://onlinelibrary.wiley.com.proxy.library.cornell.edu/doi/10.1002/bip.360101011/pdf
Vinograd et al., 1963 (band-centrifugation):
\begin{align} \sigma^2 = \frac{r_0}{r_0^0} \left\{ \frac{r_0}{r_0^0} + 2D \left( t - t^0 \right) \right\} \end{align}
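A minimal sketch that evaluates this band-variance equation as written; the diffusion coefficient, times, and radii below are placeholder assumptions for illustration, not values from Vinograd et al.
In [ ]:
# sigma^2 evaluated directly from the band-centrifugation equation above;
# D, t, t0, r0, r0_0 are placeholder assumptions, not measured quantities
D = 1e-7               # assumed diffusion coefficient of the fragment (cm^2/s)
t, t0 = 36000.0, 0.0   # assumed run time and start time (s)
r0, r0_0 = 3.8, 2.6    # assumed band position and initial band position (cm)
sigma_sq = (r0 / r0_0) * ((r0 / r0_0) + 2 * D * (t - t0))
sigma = sigma_sq ** 0.5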
isoconcentration point
\begin{equation} r_c = \sqrt{(r_t^2 + r_t r_b + r_b^2)/3} \end{equation}
r_p in relation to the particle's buoyant density (assuming equilibrium?)
\begin{equation} r_p = \sqrt{ (p_p - p_m) \frac{2\beta^{\circ}}{\omega^2} + r_c^2 }, \quad p_p = \textrm{buoyant density} \end{equation}
Maybe this should be drawn from a uniform distribution (particles distributed evenly across the gradient)?
buoyant density of a DNA fragment in CsCl
\begin{equation} p_p = 0.098F + 1.66, \quad \textrm{where F = G+C molar fraction} \end{equation}
calculating gradient density at a specific radius (to calculate p_m)
??
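One possible answer, assuming the r_p equation above can simply be rearranged: the gradient density at radius r is p_m + (ω²/2β°)(r² − r_c²), with p_m taken as the average (initial) density of the medium, i.e. the density at the isoconcentration point. A minimal sketch (the example values mirror the rotor settings used in the cells below):
In [ ]:
import numpy as np

def density_at_radius(r, r_c, p_m, beta_o, omega):
    # rearrangement of the particle-radius equation above:
    # rho(r) = p_m + (omega^2 / (2 * beta_o)) * (r^2 - r_c^2)
    return p_m + (omega**2 / (2.0 * beta_o)) * (np.asarray(r)**2 - r_c**2)

# example: 55,000 RPM CsCl run, values as in the parameter cell further down
omega = 2 * np.pi * 55000 / 60
density_at_radius(np.linspace(2.6, 4.85, 5), 3.78, 1.7, 1.14e9, omega)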
info needed on a DNA fragment to determine its sigma of the Gaussian distribution
In [1]:
%pylab inline
In [2]:
import scipy as sp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import mixture  # PyMix
#import sklearn.mixture as mixture
In [72]:
n_frags = 10000
frag_GC = np.random.normal(0.5, 0.1, n_frags)       # fragment G+C content (molar fraction)
frag_GC[frag_GC < 0] = 0                             # clip to the valid [0,1] range
frag_GC[frag_GC > 1] = 1
frag_len = np.random.normal(10000, 1000, n_frags)    # fragment length (bp)
In [73]:
ret = plt.hist2d(frag_GC, frag_len, bins=100)
In [89]:
RPM = 55000
omega = (2 * np.pi * RPM) / 60          # angular velocity (rad/s)
beta_o = 1.14 * 10**9                   # beta^o for CsCl
radius_bottom = 4.85                    # cm
radius_top = 2.6                        # cm
col_len = radius_bottom - radius_top    # gradient column length (cm)
density_medium = 1.7                    # average density of the CsCl medium (g/ml)
In [75]:
# BD from GC
frag_BD = 0.098 * frag_GC + 1.66
ret = plt.hist(frag_BD, bins=100)
In [84]:
sedimentation = (frag_len*666)**0.479 * 0.00834 + 2.8  # sedimentation coefficient from fragment length (~666 g/mol per bp)
ret = plt.hist(sedimentation, bins=100)
In [86]:
# sedimentation as a function of fragment length
len_range = np.arange(1,10000, 100)
ret = plt.scatter(len_range, 2.8 + 0.00834 * (len_range*666)**0.479 )
In [113]:
# isoconcentration point
iso_point = sqrt((radius_top**2 + radius_top * radius_bottom + radius_bottom**2)/3)
iso_point
Out[113]:
In [101]:
# radius of particle
#radius_particle = np.sqrt( (frag_BD - density_medium)*2*(beta_o/omega) + iso_point**2 )
#ret = plt.hist(radius_particle)
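A hedged, working variant of the commented-out cell above: it uses ω² (as in the particle-radius equation) rather than ω, and guards against fragments whose buoyant density falls below the assumed medium density, which would otherwise put a negative value under the square root.
In [ ]:
# particle radius from buoyant density; omega squared per the equation above
inside = (frag_BD - density_medium) * 2 * beta_o / omega**2 + iso_point**2
radius_particle = np.sqrt(np.where(inside >= 0, inside, np.nan))  # NaN for non-banding fragments
ret = plt.hist(radius_particle[~np.isnan(radius_particle)], bins=100)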
In [ ]:
In [95]:
n_dists = 10
n_samp = 10000
In [96]:
def make_mm(n_dists):
    # random component means (0-1) and standard deviations (0-0.1)
    dist_loc = np.random.uniform(0, 1, n_dists)
    dist_scale = np.random.uniform(0, 0.1, n_dists)
    dists = [mixture.NormalDistribution(x, y) for x, y in zip(dist_loc, dist_scale)]
    # equal component weights, adjusted so they sum to exactly 1
    eq_weights = np.array([1.0 / n_dists] * n_dists)
    eq_weights[0] += 1.0 - np.sum(eq_weights)
    return mixture.MixtureModel(n_dists, eq_weights, dists)
In [97]:
mm = make_mm(n_dists)
In [98]:
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
In [99]:
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
In [100]:
n_dists = 1000
mm = make_mm(n_dists)
In [101]:
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
In [102]:
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
In [103]:
n_dists = 10000
mm = make_mm(n_dists)
In [104]:
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
In [105]:
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
In [106]:
n_samp = 100000
In [ ]:
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
In [ ]:
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
Notes:
In [ ]:
x = np.random.normal(3, 1, 100)
y = np.random.normal(1, 1, 100)
H, xedges, yedges = np.histogram2d(y, x, bins=100)
In [ ]:
H
For each genome in the mock community, simulate N fragments and calculate their Gaussian distributions in the gradient. Create a mixture model of those Gaussian distributions to sample Aa fragments, where Aa = the absolute abundance of the taxon in the mock community. One mixture model per genome.
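A minimal sketch of that plan, reusing the PyMix calls already used above; `frag_BD`, `frag_sigma`, and `abs_abundance` are hypothetical stand-ins for the per-genome fragment parameters and taxon abundance.
In [ ]:
def genome_mixture(frag_BD, frag_sigma):
    # one mixture model per genome: one Gaussian per simulated fragment
    n = len(frag_BD)
    dists = [mixture.NormalDistribution(mu, sd) for mu, sd in zip(frag_BD, frag_sigma)]
    weights = np.array([1.0 / n] * n)
    weights[0] += 1.0 - np.sum(weights)   # force weights to sum to exactly 1
    return mixture.MixtureModel(n, weights, dists)

def sample_taxon(mm, abs_abundance):
    # draw Aa fragment positions (buoyant densities) for one taxon
    return mm.sampleDataSet(abs_abundance).getInternalFeature(0).flatten()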
In [ ]: