Scientific Computing — Problem Set 3

Stefan Countryman

Question 1

1. Determine the true means, $\hat{\bar{v}}_{a}$, for $v_{1}$, $v_{2}$, ..., $v_{5}$


In [1]:
include("q1/p1.jl")


~~ QUESTION 1 ~~

Part 1
	The true means v̄̂1, v̄̂2, ..., v̄̂5 are:

	v̄̂[1] = 1.7608399300642636
	v̄̂[2] = 2.886344221395922
	v̄̂[3] = 4.01184851272758
	v̄̂[4] = 5.137116475044777
	v̄̂[5] = 6.262384437361974

2. Consider values $N$ = 1,000 and $N$ = 10,000. There are $M/N$ samples of this size in our $M$ values. Histogram the sample means for these values of $N$ and determine the true standard deviation of the means $\hat{\sigma}_{\bar{v}_{a},N}$.

Recall that $\hat{\sigma}_{\bar{v}_{a},N} = \frac{\sigma}{\sqrt{N}}$, where $\sigma$ is the variance of the overall population. So we should see that $$\bigg(\frac{\hat{\sigma}_{\bar{v}_{a},N_{1}}}{\hat{\sigma}_{\bar{v}_{a},N_{2}}}\bigg)^{2} = \frac{N_{2}}{N_{2}} = 10$$ in this case. This is roughly the result we get (see below).


In [2]:
include("q1/p2.jl")


Part 2

	N1 = 1000
	N2 = 10000

	 Standard deviations of the sample means are:

	σ̂1 = [0.24311784337397993,0.19294412355261062,0.17705504776718253,0.1942781528615023,0.24564575367828553]
	σ̂2 = [0.08172736485350657,0.06386716663997997,0.05606668407073371,0.05900974192855537,0.07424533843618523]

	We expect (σ̂1./σ̂2).^2 = N2/N1 = 10:

	(σ̂1[1]/σ̂2[1])^2 = 8.849091320213951
	(σ̂1[2]/σ̂2[2])^2 = 9.12657468656696
	(σ̂1[3]/σ̂2[3])^2 = 9.972565157927063
	(σ̂1[4]/σ̂2[4])^2 = 10.839281637664836
	(σ̂1[5]/σ̂2[5])^2 = 10.94662245831677

	...which is pretty close.

	Making histograms and placing in 'histograms' vector

In [3]:
histograms[1]


Out[3]:
Sample mean -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 -2 0 2 4 6 -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 2.6 2.8 3.0 3.2 3.4 3.6 3.8 4.0 4.2 4.4 4.6 4.8 5.0 N1 = 1,000 N2 = 10,000 Legend title -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 -6.0 -5.8 -5.6 -5.4 -5.2 -5.0 -4.8 -4.6 -4.4 -4.2 -4.0 -3.8 -3.6 -3.4 -3.2 -3.0 -2.8 -2.6 -2.4 -2.2 -2.0 -1.8 -1.6 -1.4 -1.2 -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 2.6 2.8 3.0 3.2 3.4 3.6 3.8 4.0 4.2 4.4 4.6 4.8 5.0 5.2 5.4 5.6 5.8 6.0 6.2 6.4 6.6 6.8 7.0 7.2 7.4 7.6 7.8 8.0 8.2 8.4 8.6 8.8 9.0 9.2 9.4 9.6 9.8 10.0 10.2 10.4 10.6 10.8 11.0 11.2 11.4 11.6 11.8 12.0 -10 0 10 20 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 Relative Frequency Sample Mean Histogram in Dataset 1

In [4]:
histograms[2]


Out[4]:
Sample mean -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 6.0 0 2 4 6 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 2.6 2.8 3.0 3.2 3.4 3.6 3.8 4.0 4.2 4.4 4.6 4.8 5.0 5.2 5.4 5.6 5.8 6.0 N1 = 1,000 N2 = 10,000 Legend title -10 -8 -6 -4 -2 0 2 4 6 8 10 12 14 16 18 -8.0 -7.5 -7.0 -6.5 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 15.5 16.0 -10 0 10 20 -8.0 -7.5 -7.0 -6.5 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 15.5 16.0 Relative Frequency Sample Mean Histogram in Dataset 2

In [5]:
histograms[3]


Out[5]:
Sample mean 2.0 2.2 2.4 2.6 2.8 3.0 3.2 3.4 3.6 3.8 4.0 4.2 4.4 4.6 4.8 5.0 5.2 5.4 5.6 5.8 6.0 2.20 2.25 2.30 2.35 2.40 2.45 2.50 2.55 2.60 2.65 2.70 2.75 2.80 2.85 2.90 2.95 3.00 3.05 3.10 3.15 3.20 3.25 3.30 3.35 3.40 3.45 3.50 3.55 3.60 3.65 3.70 3.75 3.80 3.85 3.90 3.95 4.00 4.05 4.10 4.15 4.20 4.25 4.30 4.35 4.40 4.45 4.50 4.55 4.60 4.65 4.70 4.75 4.80 4.85 4.90 4.95 5.00 5.05 5.10 5.15 5.20 5.25 5.30 5.35 5.40 5.45 5.50 5.55 5.60 5.65 5.70 5.75 5.80 5.85 2 3 4 5 6 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 N1 = 1,000 N2 = 10,000 Legend title -15 -10 -5 0 5 10 15 20 25 -10.0 -9.5 -9.0 -8.5 -8.0 -7.5 -7.0 -6.5 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 15.5 16.0 16.5 17.0 17.5 18.0 18.5 19.0 19.5 20.0 -10 0 10 20 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Relative Frequency Sample Mean Histogram in Dataset 3

In [6]:
histograms[4]


Out[6]:
Sample mean 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 3.00 3.05 3.10 3.15 3.20 3.25 3.30 3.35 3.40 3.45 3.50 3.55 3.60 3.65 3.70 3.75 3.80 3.85 3.90 3.95 4.00 4.05 4.10 4.15 4.20 4.25 4.30 4.35 4.40 4.45 4.50 4.55 4.60 4.65 4.70 4.75 4.80 4.85 4.90 4.95 5.00 5.05 5.10 5.15 5.20 5.25 5.30 5.35 5.40 5.45 5.50 5.55 5.60 5.65 5.70 5.75 5.80 5.85 5.90 5.95 6.00 6.05 6.10 6.15 6.20 6.25 6.30 6.35 6.40 6.45 6.50 6.55 6.60 6.65 6.70 6.75 6.80 6.85 6.90 6.95 7.00 7.05 7.10 7.15 7.20 7.25 7.30 7.35 7.40 7.45 7.50 0 2 4 6 8 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 6.0 6.1 6.2 6.3 6.4 6.5 6.6 6.7 6.8 6.9 7.0 7.1 7.2 7.3 7.4 7.5 N1 = 1,000 N2 = 10,000 Legend title -10 -8 -6 -4 -2 0 2 4 6 8 10 12 14 16 18 -8.0 -7.5 -7.0 -6.5 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 15.5 16.0 -10 0 10 20 -8.0 -7.5 -7.0 -6.5 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 15.5 16.0 Relative Frequency Sample Mean Histogram in Dataset 4

In [7]:
histograms[5]


Out[7]:
Sample mean 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 6.0 6.1 6.2 6.3 6.4 6.5 6.6 6.7 6.8 6.9 7.0 7.1 7.2 7.3 7.4 7.5 7.6 7.7 7.8 7.9 8.0 8.1 8.2 8.3 8.4 8.5 8.6 8.7 8.8 8.9 9.0 9.1 9.2 9.3 9.4 9.5 0 5 10 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 2.6 2.8 3.0 3.2 3.4 3.6 3.8 4.0 4.2 4.4 4.6 4.8 5.0 5.2 5.4 5.6 5.8 6.0 6.2 6.4 6.6 6.8 7.0 7.2 7.4 7.6 7.8 8.0 8.2 8.4 8.6 8.8 9.0 9.2 9.4 9.6 N1 = 1,000 N2 = 10,000 Legend title -10 -8 -6 -4 -2 0 2 4 6 8 10 12 14 16 18 -8.0 -7.5 -7.0 -6.5 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 15.5 16.0 -10 0 10 20 -8.0 -7.5 -7.0 -6.5 -6.0 -5.5 -5.0 -4.5 -4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 15.5 16.0 Relative Frequency Sample Mean Histogram in Dataset 5

Each histogram shows visibly smaller standard deviation for the larger sample size.

3. You can now determine the true autocorrelation function for each variable, $\hat{C}_{v_{a}, n}$, which is given by:

$$\hat{C}_{v_{a}, n} = \frac{1}{M-n} \sum_{i=1}^{M-n} \big(v_{a,i+n} - \hat{\bar{v}}_{a}\big)\big(v_{a,i} - \hat{\bar{v}}_{a}\big)$$

$n$ goes from 0 to some maximum value $n_{cut}$ with $n_{cut} << M$. Plot $\hat{C}_{v_{a},n}/\hat{C}_{v_{a},0}$ versus $n$ for $a = 1...5$.

We want to pick our bin sizes $n$ such that $\hat{C}_{v_{a},n}$ is small.


In [8]:
include("q1/p3.jl");


Part 3

	Calculating autocorrelations

	Making autocorrelation plots and placing in 'plots' vector

In [9]:
plots[1]


Out[9]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Dataset 1

In [10]:
plots[2]


Out[10]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Dataset 2

In [11]:
plots[3]


Out[11]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Dataset 3

In [12]:
plots[4]


Out[12]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Dataset 4

In [13]:
plots[5]


Out[13]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Dataset 5

3. Find the integrated autocorrelation times

$$ \hat{\tau}_{int,v_{a}} \equiv \frac{1}{2}\frac{1}{\hat{C}_{v_{a},0}} \sum_{n=-n_{cut}}^{n_{cut}} \hat{C}_{v_{a},n} = \sum_{n=0}^{n_{cut}} \frac{\hat{C}_{v_{a},n}}{\hat{C}_{v_{a},0}} - \frac{1}{2} $$

...since the zeroth term equals 1.

Estimate a value for $n_{cut}$ from your plots. $n_{cut}$ should be large enough that $\hat{C}_{v_{a},n}/\hat{C}_{v_{a},0}$ has gotten close enough to zero that the value of $\hat{\tau}_{int,v_{a}}$ is not affected by modest changes in $n_{cut}$.

Judging by the above plots, $\hat{C}_{v_{a},n}/\hat{C}_{v_{a},0}$ reaches zero at around $n = 100$ for each dataset. Since $2\tau_{int}$ is the separation between unrelated measurements, pick $n_{cut} = 300$ to be on the safe side.


In [14]:
include("q1/p4.jl")


Part 4

	Pick ncut = 200 and calculate autocorrelation times

	Autocorrelation times:

		τ̂v1:	42.02837656926044
		τ̂v2:	42.64103861292871
		τ̂v3:	39.9497824976959
		τ̂v4:	42.55317445674922
		τ̂v5:	41.84790218850918

5. Calculate the true standard deviation of the data, i.e.

$$ \hat{\sigma}_{v_{a}}^{2} \equiv \frac{1}{M-1} \sum_{i=1}^{M} (v_{a,i} - \hat{\bar{v}}_{a})^{2} $$

For a sample of size $N$, we should have

$$ \hat{\sigma}_{\bar{v}_{a},N} = \sqrt{\frac{2\hat{\tau}_{int,v_{a}}}{N}} \hat{\sigma}_{v_{a}} $$

Giving the testable hypothesis that

$$ N = 2 \hat{\tau}_{int,v_{a}} \frac{\hat{\sigma}_{v_{a}}^{2}}{\hat{\sigma}_{\bar{v}_{a},N}^{2}} $$

In [15]:
include("q1/p5.jl")


Part 5

	Calculating σ̂ for each a
		σ̂[1] = 0.8668358466567552
		σ̂[2] = 0.6795499044957977
		σ̂[3] = 0.6377090628372403
		σ̂[4] = 0.676184727622303
		σ̂[5] = 0.8618077497822448

	Recall, from Part 2:

	N1 = 1000
	N2 = 10000


	We expect:

		2 * τ̂[a] * ( σ̂[a]^2 / σ̂1[a]^2 ) = N1 = 1000

	and

		2 * τ̂[a] * ( σ̂[a]^2 / σ̂2[a]^2 ) = N2 = 10000

	Test that hypothesis...

	for N1:

		2 * τ̂[a] * ( σ̂[a]^2 / σ̂1[a]^2 ) = N1 = 1000

		2 * τ̂[a] * ( σ̂[a]^2 / σ̂1[a]^2 ) = N1 = 1000

		2 * τ̂[a] * ( σ̂[a]^2 / σ̂1[a]^2 ) = N1 = 1000

		2 * τ̂[a] * ( σ̂[a]^2 / σ̂1[a]^2 ) = N1 = 1000

		2 * τ̂[a] * ( σ̂[a]^2 / σ̂1[a]^2 ) = N1 = 1000

	...and for N2:

		2 * τ̂[a] * (  σ̂[a]^2 / σ̂2[a]^2 ) = N2 = 10000

		2 * τ̂[a] * (  σ̂[a]^2 / σ̂2[a]^2 ) = N2 = 10000

		2 * τ̂[a] * (  σ̂[a]^2 / σ̂2[a]^2 ) = N2 = 10000

		2 * τ̂[a] * (  σ̂[a]^2 / σ̂2[a]^2 ) = N2 = 10000

		2 * τ̂[a] * (  σ̂[a]^2 / σ̂2[a]^2 ) = N2 = 10000

	Which is exactly what we would expect.

6. Calculate the true covariance matrix for the data, defined by

$$ \hat{c}_{v_{a},v_{b}} = \frac{1}{M} \sum_{i=1}^{M} (v_{a,i} - \hat{\bar{v}}_{a}) (v_{b,i} - \hat{\bar{v}}_{b}) $$

It is customary to define a normalized version of $\hat{c}_{v_{a},v_{b}}$ by

$$ \hat{\rho}_{v_{a},v_{b}} \equiv \frac{\hat{c}_{v_{a},v_{b}}}{\hat{\sigma}_{v_{a}}\hat{\sigma}_{v_{b}}} $$

Some scratch work for problem 6:

We can nicely express the covariance matrix $\hat{c}$ as

$$ \hat{c} = \frac{1}{M} D^{T}D $$

or in code as

ĉ = transpose(D) * D ./ M

where

D = v - ones(M) * transpose(v̄̂)

and we can write $\hat{\rho}_{v_{a},v_{b}}$ as

  ρ̂ = ĉ ./ ( σ * transpose(σ) )

In [16]:
include("q1/p6.jl")


Part 6

	Calculating the true covariance matrix, ĉ:

	Calculating the normalized covariance matrix, ρ̂:
Out[16]:
5x5 Array{Float64,2}:
 0.999999    0.930249  0.623271  0.293947  6.93888e-5
 0.930249    0.999999  0.866737  0.593589  0.290118  
 0.623271    0.866737  0.999999  0.86551   0.618212  
 0.293947    0.593589  0.86551   0.999999  0.928775  
 6.93888e-5  0.290118  0.618212  0.928775  0.999999  

As expected, the normalized variances along the main diagonal are all 1.


In [17]:
using PyPlot
surf(ρ̂)
xlabel("va")
ylabel("vb")
title("Normalized Covariance between Datasets")


Warning: error initializing module PyPlot:
PyCall.PyError(msg=":PyImport_ImportModule", T=PyCall.PyObject(o=0x000000001b6dd698), val=PyCall.PyObject(o=0x000000001b92ffc8), traceback=PyCall.PyObject(o=0x0000000000000000))
Warning: using PyPlot.plot in module Main conflicts with an existing identifier.
pltm not defined
while loading In[17], in expression starting on line 2

 in gca at /Users/Stefan/.julia/v0.3/PyPlot/src/PyPlot.jl:367
 in plot_surface at /Users/Stefan/.julia/v0.3/PyPlot/src/PyPlot.jl:434
 in surf at /Users/Stefan/.julia/v0.3/PyPlot/src/PyPlot.jl:466

I'm having trouble getting matplotlib to render the surface correctly; it looks fine from a different angle (nicely peaked along the main diagonal), but from the default angle, those polygons are transparent.

7. Pick two groups of data from the full universe of data. One should have N = 1,000 and the other should have N = 10,000. These two groups represent results one might get from simulations. These two groups represent results one might get from simulations. We want to see how well these groups represent reqults one might get from simulations. We want to see how well these groups reproduced the true statistical results for these data.

a. Estimate the autocorrelation function $C_{v_{a},n}$ from these two groups and the integrated autocorrelation time.

b. Use these to determine the standard deviation of the mean $\sigma_{\bar{b}_{a},N}$.

c. Compare this with the results from the universe of data. Also compare the normalized covariance matrix $\rho_{v_{a},v_{b}}$ from these small samples with the universe of data.


In [18]:
include("q1/p7.jl")


Part 7

	Pick subjets v1 and v2 of each dataset,
	sizes N1 = 1,000 & N2 = 10,000.

	Estimate autocorrelations

	...for N1...

	...for N2 (See plots below)

	Placing estimated autocorrelation plots 'plots1' and 'plots2':

	Estimating autocorrelation times τ1 and τ2
ns not defined
while loading /Users/Stefan/Dropbox/1-gsas/1st-year/Scientific Computing/Assignments/set03/q1/p7.jl, in expression starting on line 52
while loading In[18], in expression starting on line 1

 in anonymous at no file:53
 in include at /Applications/Julia-0.3.7.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at /Applications/Julia-0.3.7.app/Contents/Resources/julia/lib/julia/sys.dylib

In [19]:
plots1[1]


Out[19]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 1 with Sample Size N1 = 1000

In [20]:
plots1[2]


Out[20]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 2 with Sample Size N1 = 1000

In [21]:
plots1[3]


Out[21]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 3 with Sample Size N1 = 1000

In [22]:
plots1[4]


Out[22]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 4 with Sample Size N1 = 1000

In [23]:
plots1[5]


Out[23]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 5 with Sample Size N1 = 1000

The autocorrelation takes too many steps (> 100) to settle down for N1 = 1,000.

We know from earlier that $n_{cut}=150$ should be sufficient, but just working off of sample sizes of 1,000, we aren't able to satisfy the condition that $n_{cut} << N_{1}$.


In [24]:
plots2[1]


Out[24]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 1 with Sample Size N2 = 10000

In [25]:
plots2[2]


Out[25]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 2 with Sample Size N2 = 10000

In [26]:
plots2[3]


Out[26]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 3 with Sample Size N2 = 10000

In [27]:
plots2[4]


Out[27]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 4 with Sample Size N2 = 10000

In [28]:
plots2[5]


Out[28]:
Separation -500 -400 -300 -200 -100 0 100 200 300 400 500 600 700 800 900 -400 -380 -360 -340 -320 -300 -280 -260 -240 -220 -200 -180 -160 -140 -120 -100 -80 -60 -40 -20 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 480 500 520 540 560 580 600 620 640 660 680 700 720 740 760 780 800 -500 0 500 1000 -400 -350 -300 -250 -200 -150 -100 -50 0 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Normalized Autocorrelation Estimated Autocorrelation for Dataset 5 with Sample Size N2 = 10000

Let's see how good ρ1 and ρ2 were as estimates of ρ̂:


In [29]:
ρratio1 = ρ̂ ./ ρ1


ρ1 not defined
while loading In[29], in expression starting on line 1

In [30]:
ρrelerr1 = ones(5,5) - ρratio1


ρratio1 not defined
while loading In[30], in expression starting on line 1

Even for the smaller of the two sample sizes, $N1 = 1,000$, the estimate is fairly close to the true value, except for the correlations between datasets v1 and v5, which are off for both sample sizes.


In [31]:
ρratio2 = ρ̂ ./ ρ2


ρ2 not defined
while loading In[31], in expression starting on line 1

In [32]:
ρrelerr2 = ones(5,5) - ρratio2


ρratio2 not defined
while loading In[32], in expression starting on line 1

Again, the estimates are (for the most part) significantly better for the larger sample size. Let's compare the ratio of the relative errors of the two estimators:


In [33]:
ρrelerrratio = ρrelerr1 ./ ρrelerr2


ρrelerr1 not defined
while loading In[33], in expression starting on line 1

Take the geometric mean of the off-diagonal rows to see how much of a reduction in error we get, on average, in going from N1 = 1,000 to N2 = 10,000.


In [34]:
ρoffdiagratio = ρrelerrratio;
mn = 1
for i in [1:5]
    ρoffdiagratio[i,i] = 1.0;   # Set diagonals to 1
end
for r in ρoffdiagratio
    mn *= r;
end
mn = mn ^ (1/16);           # Off-diagonal elements


ρrelerrratio not defined
while loading In[34], in expression starting on line 1

While the overall picture is a little muddled (there are some correlation factors where the smaller sample size coincidentally gets a more accurate measurement), the correlation factors estimated in this particular case with the larger sample size are, on average, around 2.6 times better in terms of relative error than the smaller sample sizes.

Question 2

1. Break the $M$ measurements up into groups of size $N$, calculate $\bar{v}_{a}$ for each group and then calculate $f_{i}(\bar{b}_{a})$ for each group. Calculate these function of the data means for all $M/N$ groups and find the standard deviation for $f_{i}(\bar{v}_{a})$, $\hat{\sigma}_{f_{i},N}$.


In [35]:
include("q2/p0.jl")
include("q2/p1.jl")


~~ QUESTION 2 ~~

RUNNING QUESTION 1 SCRIPT AGAIN

~~ QUESTION 1 ~~

Part 1
	The true means v̄̂1, v̄̂2, ..., v̄̂5 are:

	v̄̂[1] = 1.7608399300642636
	v̄̂[2] = 2.886344221395922
	v̄̂[3] = 4.01184851272758
	v̄̂[4] = 5.137116475044777
	v̄̂[5] = 6.262384437361974

Part 2

	N1 = 1000
	N2 = 10000

	 Standard deviations of the sample means are:

	σ̂1 = [0.24311784337397993,0.19294412355261062,0.17705504776718253,0.1942781528615023,0.24564575367828553]
	σ̂2 = [0.08172736485350657,0.06386716663997997,0.05606668407073371,0.05900974192855537,0.07424533843618523]

	We expect (σ̂1./σ̂2).^2 = N2/N1 = 10:

	(σ̂1[1]/σ̂2[1])^2 = 8.849091320213951
	(σ̂1[2]/σ̂2[2])^2 = 9.12657468656696
	(σ̂1[3]/σ̂2[3])^2 = 9.972565157927063
	(σ̂1[4]/σ̂2[4])^2 = 10.839281637664836
	(σ̂1[5]/σ̂2[5])^2 = 10.94662245831677

	...which is pretty close.

	Making histograms and placing in 'histograms' vector
Part 1
RESUMING QUESTION 2

	Calculating functions of means f1v1, f2v1, ...

	Calculating standard deviations of the functions
Out[35]:
0.037721615637770205

In [36]:
include("q2/p2.jl")


Calculating σ̂f1, σ̂f2, σ̂f3 for sample sizes N1, N2

	Naive standard deviations for sample size 1,000
	σ̂f1N1 = 0.09358325029691549
	σ̂f2N1 = 0.08531349962730816
	σ̂f3N1 = 0.19846003433749787

	Naive standard deviations for sample size 10,000
	σ̂f1N2 = 0.031368333151644764
	σ̂f2N2 = 0.026418957974093424
	σ̂f3N2 = 0.06523332542094236

	True standard deviations for sample size 1,000
	σ̂truef1N1 = 0.04806147844102485
	σ̂truef2N1 = 0.029333752253445103
	σ̂truef3N1 = 0.11336945628023494

	True standard deviations for sample size 10,000
	σ̂truef1N2 = 0.01607397202563451
	σ̂truef2N2 = 0.009444648233277958
	σ̂truef3N2 = 0.037721615637770205

In [37]:
include("q2/p3.jl")


Part 3

	Estimating mean and standard deviation using
	jacknife resampling with bin size of 40.

	Estimated means, using N = 10000 and b = 40:

		dataset 1: 1.7405204690023046
		dataset 2: 2.8572197300584294
		dataset 3: 3.973918991114564
		dataset 4: 5.113901871965279
		dataset 5: 6.253884752816004

	Estimated standard deviations:

		dataset 1: 0.04992736300388104
		dataset 2: 0.038429589950498916
		dataset 3: 0.03473670589572516
		dataset 4: 0.039887332876986356
		dataset 5: 0.05244070766470152

	Calculating dependence of jacknife σ estimator on b.

	Generating plots in σvsbplots to show that dependence.

In [38]:
σvsbplots[1]


Out[38]:
Bin size b -150 -100 -50 0 50 100 150 200 250 -100 -95 -90 -85 -80 -75 -70 -65 -60 -55 -50 -45 -40 -35 -30 -25 -20 -15 -10 -5 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 105 110 115 120 125 130 135 140 145 150 155 160 165 170 175 180 185 190 195 200 -100 0 100 200 -100 -90 -80 -70 -60 -50 -40 -30 -20 -10 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200 -0.10 -0.08 -0.06 -0.04 -0.02 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 -0.080 -0.075 -0.070 -0.065 -0.060 -0.055 -0.050 -0.045 -0.040 -0.035 -0.030 -0.025 -0.020 -0.015 -0.010 -0.005 0.000 0.005 0.010 0.015 0.020 0.025 0.030 0.035 0.040 0.045 0.050 0.055 0.060 0.065 0.070 0.075 0.080 0.085 0.090 0.095 0.100 0.105 0.110 0.115 0.120 0.125 0.130 0.135 0.140 0.145 0.150 0.155 0.160 0.165 -0.1 0.0 0.1 0.2 -0.080 -0.075 -0.070 -0.065 -0.060 -0.055 -0.050 -0.045 -0.040 -0.035 -0.030 -0.025 -0.020 -0.015 -0.010 -0.005 0.000 0.005 0.010 0.015 0.020 0.025 0.030 0.035 0.040 0.045 0.050 0.055 0.060 0.065 0.070 0.075 0.080 0.085 0.090 0.095 0.100 0.105 0.110 0.115 0.120 0.125 0.130 0.135 0.140 0.145 0.150 0.155 0.160 0.165 Estimated Variance Estimated Variance vs. Bin Size for Dataset 1

In [39]:
σvsbplots[2]


Out[39]:
Bin size b -150 -100 -50 0 50 100 150 200 250 -100 -95 -90 -85 -80 -75 -70 -65 -60 -55 -50 -45 -40 -35 -30 -25 -20 -15 -10 -5 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 105 110 115 120 125 130 135 140 145 150 155 160 165 170 175 180 185 190 195 200 -100 0 100 200 -100 -90 -80 -70 -60 -50 -40 -30 -20 -10 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200 -0.07 -0.06 -0.05 -0.04 -0.03 -0.02 -0.01 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.11 0.12 0.13 -0.060 -0.058 -0.056 -0.054 -0.052 -0.050 -0.048 -0.046 -0.044 -0.042 -0.040 -0.038 -0.036 -0.034 -0.032 -0.030 -0.028 -0.026 -0.024 -0.022 -0.020 -0.018 -0.016 -0.014 -0.012 -0.010 -0.008 -0.006 -0.004 -0.002 0.000 0.002 0.004 0.006 0.008 0.010 0.012 0.014 0.016 0.018 0.020 0.022 0.024 0.026 0.028 0.030 0.032 0.034 0.036 0.038 0.040 0.042 0.044 0.046 0.048 0.050 0.052 0.054 0.056 0.058 0.060 0.062 0.064 0.066 0.068 0.070 0.072 0.074 0.076 0.078 0.080 0.082 0.084 0.086 0.088 0.090 0.092 0.094 0.096 0.098 0.100 0.102 0.104 0.106 0.108 0.110 0.112 0.114 0.116 0.118 0.120 -0.1 0.0 0.1 0.2 -0.060 -0.055 -0.050 -0.045 -0.040 -0.035 -0.030 -0.025 -0.020 -0.015 -0.010 -0.005 0.000 0.005 0.010 0.015 0.020 0.025 0.030 0.035 0.040 0.045 0.050 0.055 0.060 0.065 0.070 0.075 0.080 0.085 0.090 0.095 0.100 0.105 0.110 0.115 0.120 Estimated Variance Estimated Variance vs. Bin Size for Dataset 2

In [40]:
σvsbplots[3]


Out[40]:
Bin size b -150 -100 -50 0 50 100 150 200 250 -100 -95 -90 -85 -80 -75 -70 -65 -60 -55 -50 -45 -40 -35 -30 -25 -20 -15 -10 -5 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 105 110 115 120 125 130 135 140 145 150 155 160 165 170 175 180 185 190 195 200 -100 0 100 200 -100 -90 -80 -70 -60 -50 -40 -30 -20 -10 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200 -0.06 -0.05 -0.04 -0.03 -0.02 -0.01 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.11 -0.050 -0.048 -0.046 -0.044 -0.042 -0.040 -0.038 -0.036 -0.034 -0.032 -0.030 -0.028 -0.026 -0.024 -0.022 -0.020 -0.018 -0.016 -0.014 -0.012 -0.010 -0.008 -0.006 -0.004 -0.002 0.000 0.002 0.004 0.006 0.008 0.010 0.012 0.014 0.016 0.018 0.020 0.022 0.024 0.026 0.028 0.030 0.032 0.034 0.036 0.038 0.040 0.042 0.044 0.046 0.048 0.050 0.052 0.054 0.056 0.058 0.060 0.062 0.064 0.066 0.068 0.070 0.072 0.074 0.076 0.078 0.080 0.082 0.084 0.086 0.088 0.090 0.092 0.094 0.096 0.098 0.100 0.102 -0.10 -0.05 0.00 0.05 0.10 -0.050 -0.045 -0.040 -0.035 -0.030 -0.025 -0.020 -0.015 -0.010 -0.005 0.000 0.005 0.010 0.015 0.020 0.025 0.030 0.035 0.040 0.045 0.050 0.055 0.060 0.065 0.070 0.075 0.080 0.085 0.090 0.095 0.100 0.105 Estimated Variance Estimated Variance vs. Bin Size for Dataset 3

In [41]:
σvsbplots[4]


Out[41]:
Bin size b -150 -100 -50 0 50 100 150 200 250 -100 -95 -90 -85 -80 -75 -70 -65 -60 -55 -50 -45 -40 -35 -30 -25 -20 -15 -10 -5 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 105 110 115 120 125 130 135 140 145 150 155 160 165 170 175 180 185 190 195 200 -100 0 100 200 -100 -90 -80 -70 -60 -50 -40 -30 -20 -10 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200 -0.07 -0.06 -0.05 -0.04 -0.03 -0.02 -0.01 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.11 0.12 0.13 -0.060 -0.058 -0.056 -0.054 -0.052 -0.050 -0.048 -0.046 -0.044 -0.042 -0.040 -0.038 -0.036 -0.034 -0.032 -0.030 -0.028 -0.026 -0.024 -0.022 -0.020 -0.018 -0.016 -0.014 -0.012 -0.010 -0.008 -0.006 -0.004 -0.002 0.000 0.002 0.004 0.006 0.008 0.010 0.012 0.014 0.016 0.018 0.020 0.022 0.024 0.026 0.028 0.030 0.032 0.034 0.036 0.038 0.040 0.042 0.044 0.046 0.048 0.050 0.052 0.054 0.056 0.058 0.060 0.062 0.064 0.066 0.068 0.070 0.072 0.074 0.076 0.078 0.080 0.082 0.084 0.086 0.088 0.090 0.092 0.094 0.096 0.098 0.100 0.102 0.104 0.106 0.108 0.110 0.112 0.114 0.116 0.118 0.120 -0.1 0.0 0.1 0.2 -0.060 -0.055 -0.050 -0.045 -0.040 -0.035 -0.030 -0.025 -0.020 -0.015 -0.010 -0.005 0.000 0.005 0.010 0.015 0.020 0.025 0.030 0.035 0.040 0.045 0.050 0.055 0.060 0.065 0.070 0.075 0.080 0.085 0.090 0.095 0.100 0.105 0.110 0.115 0.120 Estimated Variance Estimated Variance vs. Bin Size for Dataset 4

In [42]:
σvsbplots[5]


Out[42]:
Bin size b -150 -100 -50 0 50 100 150 200 250 -100 -95 -90 -85 -80 -75 -70 -65 -60 -55 -50 -45 -40 -35 -30 -25 -20 -15 -10 -5 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 105 110 115 120 125 130 135 140 145 150 155 160 165 170 175 180 185 190 195 200 -100 0 100 200 -100 -90 -80 -70 -60 -50 -40 -30 -20 -10 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200 -0.10 -0.08 -0.06 -0.04 -0.02 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 -0.080 -0.075 -0.070 -0.065 -0.060 -0.055 -0.050 -0.045 -0.040 -0.035 -0.030 -0.025 -0.020 -0.015 -0.010 -0.005 0.000 0.005 0.010 0.015 0.020 0.025 0.030 0.035 0.040 0.045 0.050 0.055 0.060 0.065 0.070 0.075 0.080 0.085 0.090 0.095 0.100 0.105 0.110 0.115 0.120 0.125 0.130 0.135 0.140 0.145 0.150 0.155 0.160 0.165 -0.1 0.0 0.1 0.2 -0.080 -0.075 -0.070 -0.065 -0.060 -0.055 -0.050 -0.045 -0.040 -0.035 -0.030 -0.025 -0.020 -0.015 -0.010 -0.005 0.000 0.005 0.010 0.015 0.020 0.025 0.030 0.035 0.040 0.045 0.050 0.055 0.060 0.065 0.070 0.075 0.080 0.085 0.090 0.095 0.100 0.105 0.110 0.115 0.120 0.125 0.130 0.135 0.140 0.145 0.150 0.155 0.160 0.165 Estimated Variance Estimated Variance vs. Bin Size for Dataset 5

The estimated standard deviations scale with the square root of the bin size, which intuitively makes sense, given that jacknife resampling suppresses variance as bin count increases. Random fluctuations due to falling bin size become apparent beyond around b=60, but near b=40 (the rough autocorrelation time we calculated for each of our datasets), the estimated variance is stable.

4. Now calculate $f_{i}(v^{\prime}_{a,k})$ for each of the $N/b$ jacknife blocks. You can then determine $\sigma_{f_{i},N}$ from

$$ \sigma^{2}_{f_{i},N} = \frac{N/b - 1}{N/b} \sum^{N/b}_{k=1}(f_{i}(v^{\prime}_{a,k})-f_{i}(\bar{v}_{a}))^2 $$

Again, do this for a few values of b that are comparable to the integrated autocorrelation time. How does $\sigma_{f_{i},N}$ compare with $\hat{\sigma}_{f_{i},N}$ from part 1?


In [43]:
include("q2/p4.jl")


Part 4

	Estimating σ for f1..f3 with bin around b=40

	Generating plots in σfvsb to show σf dependence on b.

	Compare σfjack to naive σ̂ from part 1:

		σ̂f1N2/σfjack[4,1] = 3.0732915815122976
		σ̂f2N2/σfjack[4,2] = 4.196699645267839
		σ̂f3N2/σfjack[4,3] = 2.8213301699893942

In [44]:
σfvsb[1]


Out[44]:
Bin size b 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 31.0 31.2 31.4 31.6 31.8 32.0 32.2 32.4 32.6 32.8 33.0 33.2 33.4 33.6 33.8 34.0 34.2 34.4 34.6 34.8 35.0 35.2 35.4 35.6 35.8 36.0 36.2 36.4 36.6 36.8 37.0 37.2 37.4 37.6 37.8 38.0 38.2 38.4 38.6 38.8 39.0 39.2 39.4 39.6 39.8 40.0 40.2 40.4 40.6 40.8 41.0 41.2 41.4 41.6 41.8 42.0 42.2 42.4 42.6 42.8 43.0 43.2 43.4 43.6 43.8 44.0 44.2 44.4 44.6 44.8 45.0 45.2 45.4 45.6 45.8 46.0 46.2 46.4 46.6 46.8 47.0 47.2 47.4 47.6 47.8 48.0 48.2 48.4 48.6 48.8 49.0 30 35 40 45 50 31.0 31.5 32.0 32.5 33.0 33.5 34.0 34.5 35.0 35.5 36.0 36.5 37.0 37.5 38.0 38.5 39.0 39.5 40.0 40.5 41.0 41.5 42.0 42.5 43.0 43.5 44.0 44.5 45.0 45.5 46.0 46.5 47.0 47.5 48.0 48.5 49.0 0.0088 0.0090 0.0092 0.0094 0.0096 0.0098 0.0100 0.0102 0.0104 0.0106 0.0108 0.0110 0.0112 0.0114 0.0116 0.00895 0.00900 0.00905 0.00910 0.00915 0.00920 0.00925 0.00930 0.00935 0.00940 0.00945 0.00950 0.00955 0.00960 0.00965 0.00970 0.00975 0.00980 0.00985 0.00990 0.00995 0.01000 0.01005 0.01010 0.01015 0.01020 0.01025 0.01030 0.01035 0.01040 0.01045 0.01050 0.01055 0.01060 0.01065 0.01070 0.01075 0.01080 0.01085 0.01090 0.01095 0.01100 0.01105 0.01110 0.01115 0.01120 0.01125 0.01130 0.01135 0.01140 0.008 0.009 0.010 0.011 0.012 0.00895 0.00900 0.00905 0.00910 0.00915 0.00920 0.00925 0.00930 0.00935 0.00940 0.00945 0.00950 0.00955 0.00960 0.00965 0.00970 0.00975 0.00980 0.00985 0.00990 0.00995 0.01000 0.01005 0.01010 0.01015 0.01020 0.01025 0.01030 0.01035 0.01040 0.01045 0.01050 0.01055 0.01060 0.01065 0.01070 0.01075 0.01080 0.01085 0.01090 0.01095 0.01100 0.01105 0.01110 0.01115 0.01120 0.01125 0.01130 0.01135 0.01140 Estimated Variance Estimated Variance of function f1 vs. Bin Size

In [45]:
σfvsb[2]


Out[45]:
Bin size b 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 31.0 31.2 31.4 31.6 31.8 32.0 32.2 32.4 32.6 32.8 33.0 33.2 33.4 33.6 33.8 34.0 34.2 34.4 34.6 34.8 35.0 35.2 35.4 35.6 35.8 36.0 36.2 36.4 36.6 36.8 37.0 37.2 37.4 37.6 37.8 38.0 38.2 38.4 38.6 38.8 39.0 39.2 39.4 39.6 39.8 40.0 40.2 40.4 40.6 40.8 41.0 41.2 41.4 41.6 41.8 42.0 42.2 42.4 42.6 42.8 43.0 43.2 43.4 43.6 43.8 44.0 44.2 44.4 44.6 44.8 45.0 45.2 45.4 45.6 45.8 46.0 46.2 46.4 46.6 46.8 47.0 47.2 47.4 47.6 47.8 48.0 48.2 48.4 48.6 48.8 49.0 30 35 40 45 50 31.0 31.5 32.0 32.5 33.0 33.5 34.0 34.5 35.0 35.5 36.0 36.5 37.0 37.5 38.0 38.5 39.0 39.5 40.0 40.5 41.0 41.5 42.0 42.5 43.0 43.5 44.0 44.5 45.0 45.5 46.0 46.5 47.0 47.5 48.0 48.5 49.0 0.0054 0.0055 0.0056 0.0057 0.0058 0.0059 0.0060 0.0061 0.0062 0.0063 0.0064 0.0065 0.0066 0.0067 0.0068 0.0069 0.0070 0.0071 0.00548 0.00550 0.00552 0.00554 0.00556 0.00558 0.00560 0.00562 0.00564 0.00566 0.00568 0.00570 0.00572 0.00574 0.00576 0.00578 0.00580 0.00582 0.00584 0.00586 0.00588 0.00590 0.00592 0.00594 0.00596 0.00598 0.00600 0.00602 0.00604 0.00606 0.00608 0.00610 0.00612 0.00614 0.00616 0.00618 0.00620 0.00622 0.00624 0.00626 0.00628 0.00630 0.00632 0.00634 0.00636 0.00638 0.00640 0.00642 0.00644 0.00646 0.00648 0.00650 0.00652 0.00654 0.00656 0.00658 0.00660 0.00662 0.00664 0.00666 0.00668 0.00670 0.00672 0.00674 0.00676 0.00678 0.00680 0.00682 0.00684 0.00686 0.00688 0.00690 0.00692 0.00694 0.00696 0.00698 0.00700 0.005 0.006 0.007 0.008 0.00545 0.00550 0.00555 0.00560 0.00565 0.00570 0.00575 0.00580 0.00585 0.00590 0.00595 0.00600 0.00605 0.00610 0.00615 0.00620 0.00625 0.00630 0.00635 0.00640 0.00645 0.00650 0.00655 0.00660 0.00665 0.00670 0.00675 0.00680 0.00685 0.00690 0.00695 0.00700 0.00705 Estimated Variance Estimated Variance of function f2 vs. Bin Size

In [46]:
σfvsb[3]


Out[46]:
Bin size b 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 31.0 31.2 31.4 31.6 31.8 32.0 32.2 32.4 32.6 32.8 33.0 33.2 33.4 33.6 33.8 34.0 34.2 34.4 34.6 34.8 35.0 35.2 35.4 35.6 35.8 36.0 36.2 36.4 36.6 36.8 37.0 37.2 37.4 37.6 37.8 38.0 38.2 38.4 38.6 38.8 39.0 39.2 39.4 39.6 39.8 40.0 40.2 40.4 40.6 40.8 41.0 41.2 41.4 41.6 41.8 42.0 42.2 42.4 42.6 42.8 43.0 43.2 43.4 43.6 43.8 44.0 44.2 44.4 44.6 44.8 45.0 45.2 45.4 45.6 45.8 46.0 46.2 46.4 46.6 46.8 47.0 47.2 47.4 47.6 47.8 48.0 48.2 48.4 48.6 48.8 49.0 30 35 40 45 50 31.0 31.5 32.0 32.5 33.0 33.5 34.0 34.5 35.0 35.5 36.0 36.5 37.0 37.5 38.0 38.5 39.0 39.5 40.0 40.5 41.0 41.5 42.0 42.5 43.0 43.5 44.0 44.5 45.0 45.5 46.0 46.5 47.0 47.5 48.0 48.5 49.0 0.0195 0.0200 0.0205 0.0210 0.0215 0.0220 0.0225 0.0230 0.0235 0.0240 0.0245 0.0250 0.0255 0.0260 0.0265 0.0199 0.0200 0.0201 0.0202 0.0203 0.0204 0.0205 0.0206 0.0207 0.0208 0.0209 0.0210 0.0211 0.0212 0.0213 0.0214 0.0215 0.0216 0.0217 0.0218 0.0219 0.0220 0.0221 0.0222 0.0223 0.0224 0.0225 0.0226 0.0227 0.0228 0.0229 0.0230 0.0231 0.0232 0.0233 0.0234 0.0235 0.0236 0.0237 0.0238 0.0239 0.0240 0.0241 0.0242 0.0243 0.0244 0.0245 0.0246 0.0247 0.0248 0.0249 0.0250 0.0251 0.0252 0.0253 0.0254 0.0255 0.0256 0.0257 0.0258 0.0259 0.0260 0.018 0.020 0.022 0.024 0.026 0.0198 0.0200 0.0202 0.0204 0.0206 0.0208 0.0210 0.0212 0.0214 0.0216 0.0218 0.0220 0.0222 0.0224 0.0226 0.0228 0.0230 0.0232 0.0234 0.0236 0.0238 0.0240 0.0242 0.0244 0.0246 0.0248 0.0250 0.0252 0.0254 0.0256 0.0258 0.0260 Estimated Variance Estimated Variance of function f3 vs. Bin Size

The jacknife estimated standard deviations for the functions are each larger than our naive estimates. It seems that correlations between the variables were an important factor.

Question 3

Choosing two values for $N$, estimate $\tau_{int}$ for each of the $M/N$ samples of size $N$ in the universe of data, as a function of $n_{cut}$. Then find the standard deviation $\sigma_{\tau,N}$ of $\tau_{int}$. Does the standard deviation with $n_{cut} \sim N$ decrease as $N$ increases?


In [47]:
include("q3/p1.jl");


	Pick ncut = 150, as before

	Partitioning v into M/N bins for N1 = 1000 and N2 = 10000

	Counting number of bins

	Calculating τint for M/N1 bins

	Calculating στint for M/N1 bins

	Calculating τint for M/N2 bins

	Calculating στint for M/N2 bins

In [48]:
στintN1./στintN2


Out[48]:
5-element Array{Float64,1}:
 3.18221
 2.9918 
 2.79801
 2.95332
 3.05401

Indeed, the standard deviations are around 3 times higher for the smaller value of N.

Question 4

1. Make measurements of the temperature, potential energy, and the time average of the virial, which is given by

$$ \sum_{i} \sum_{j>i} r_{ij} \frac{\partial{V}_{ij}}{\partial{r}_{ij}} $$

for every molecular dynamics time step.

Read the temperature, potential energy, and virial datasets into arrays:


In [1]:
include("q4/p1.jl");


~~ QUESTION 4 ~~

Part 1

	Reading in values from MD simulation...
Out[1]:
1600x3 Array{Float64,2}:
 1.26639   -9874.31  65.8209
 1.2673    -9877.18  67.824 
 1.28584   -9932.28  59.046 
 1.27844   -9910.02  63.1065
 1.29029   -9945.58  56.6256
 1.33752  -10087.4   32.6259
 1.30571   -9991.53  50.2689
 1.28668   -9934.5   58.5206
 1.28913   -9942.1   54.3847
 1.3044    -9988.09  44.2214
 1.2871    -9936.06  52.8821
 1.30763   -9997.97  43.4861
 1.27127   -9888.03  59.4582
 ⋮                          
 1.32472  -10049.4   48.4946
 1.29285   -9953.06  60.2433
 1.32887  -10061.2   40.8293
 1.31279  -10013.1   49.8159
 1.32367  -10046.0   47.3308
 1.28674   -9934.8   65.1338
 1.32045  -10036.4   46.7739
 1.32402  -10047.2   41.3392
 1.28518   -9930.29  56.2173
 1.30285   -9982.91  47.6141
 1.32051  -10036.5   39.4782
 1.29792   -9968.48  46.3082

2. Find autocorrelation times for those three quantities.

HAVE TO FIX THAT AUTOCORRELATION CODE


In [39]:
include("q4/p2.jl")


Part 2

	First, plot autocorrelation to estimate τint

	Making autocorrelation plots and placing in 'mdautocorr' vector

	Autocorrelation times:

		τ̂vTemperature at T = 1.069:	0.04901925380373062
		τ̂vPotential Energy at T = 1.069:	0.05185929773389353
		τ̂vVirial at T = 1.069:	0.9207022092691668
		τ̂vTemperature at T = 1.304:	0.9073115737013056
		τ̂vPotential Energy at T = 1.304:	0.9053114226942518
		τ̂vVirial at T = 1.304:	0.8850134725174073

Note that these autocorrelation times should be multiplied by 10, since that's the number of actual simulation steps between each recorded measurement. Plot autocorrelations against step separation to get a sense of the autocorrelation times:


In [33]:
mdautocorr[1]


Out[33]:
Step Separation 0 100 200 300 400 500 600 700 800 900 1000 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Temperature at T = 1.069

In [34]:
mdautocorr[2]


Out[34]:
Step Separation 0 100 200 300 400 500 600 700 800 900 1000 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Potential Energy at T = 1.069

In [35]:
mdautocorr[3]


Out[35]:
Step Separation 0 100 200 300 400 500 600 700 800 900 1000 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Virial at T = 1.069

In [36]:
mdautocorr[4]


Out[36]:
Step Separation 0 100 200 300 400 500 600 700 800 900 1000 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Temperature at T = 1.304

In [37]:
mdautocorr[5]


Out[37]:
Step Separation 0 100 200 300 400 500 600 700 800 900 1000 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Potential Energy at T = 1.304

In [38]:
mdautocorr[6]


Out[38]:
Step Separation 0 100 200 300 400 500 600 700 800 900 1000 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 -2.00 -1.95 -1.90 -1.85 -1.80 -1.75 -1.70 -1.65 -1.60 -1.55 -1.50 -1.45 -1.40 -1.35 -1.30 -1.25 -1.20 -1.15 -1.10 -1.05 -1.00 -0.95 -0.90 -0.85 -0.80 -0.75 -0.70 -0.65 -0.60 -0.55 -0.50 -0.45 -0.40 -0.35 -0.30 -0.25 -0.20 -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 1.60 1.65 1.70 1.75 1.80 1.85 1.90 1.95 2.00 2.05 2.10 2.15 2.20 2.25 2.30 2.35 2.40 2.45 2.50 -2 0 2 4 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 Autocorrelation Autocorrelation for Virial at T = 1.304

The distance between uncorrelated steps for the virial, temperature, and potential energy at $T = 1.304$ all seem to be around 100 steps, corresponding to 10 steps in our saved data (since we only saved these quantities every 10 simulation steps). At $T = 1.069$, with a calm region for all three values at around 250 steps. Use these values in determining ncut for each quantity for the jacknife part.

3. Measure the covariance matrix for these 3 quantities.


In [44]:
include("q4/p3.jl")


Part 3

	Calculating the covariance matrix, Cmoldy:
Out[44]:
6x6 Array{Float64,2}:
  0.000267825    -0.804464    -0.143681    …     0.0248542    0.0049742
 -0.804464     2416.39       431.494           -74.4603     -14.8766   
 -0.143681      431.494       93.51             -9.5062      -2.71595  
 -8.27622e-6      0.0247943    0.00316789       -1.1759      -0.207009 
  0.0248542     -74.4603      -9.5062         3532.88       621.752    
  0.0049742     -14.8766      -2.71595     …   621.752      126.779    

4. Use $\tau_{int}$, binning, and jacknife resampling to get an error estimate on the pressure.

Recall:

$$ \frac{P}{\rho kT} = 1 - \frac{1}{6NkT} \langle \sum_{i \neq j} r_{ij} \frac{\partial{V}}{\partial{r_{ij}}} \rangle $$

which gives pressure in natural units.


In [ ]:
include("q4/p4.jl")


Part 4
Finding number of steps between independent data recordings
bp_1069 = 2 * maximum([τtemp_1069,τpe_1069,τvir_1069]) => 1.8414044185383336
bp_1304 = 2 * maximum([τtemp_1304,τpe_1304,τvir_1304]) => 1.814623147402611

	Estimating σpressure:
mdjack_1069 = jacknife(md_1069,lengthmd,int(ceil(bp_1069))) => 

Note: I tried changing what we discussed (i.e. calculating the energy by multiplying by 0.5 instead of 16), but it only gave more nonsensical looking results, so I'm keeping things as they were, since the behaviour seems qualitatively correct.