Robert Morehead

Homework 1

1.

I got Julia, Julia Studio, and the IJulia notebook working on my system. I don't plan on using the lab computer.

2a.


In [1]:
#Code from exercise
srand(42)
N = 10000
true_mean = 10000.
y = true_mean+randn(N)

sample_mean = mean(y)
sample_var = var(y)
println("Mean: ",sample_mean,"  Varience: ",sample_var)


Mean: 9999.998011659798  Variance: 1.0199227833780644

2b.


In [2]:
function mean_and_var(y,sample_mean,sample_var) # made this a function for part 2d.
  y32bit = convert(Array{Float32,1},y)
  y16bit = convert(Array{Float16,1},y)
  mean32,var32 = mean(y32bit),var(y32bit)
  mean16,var16 = mean(y16bit),var(y16bit)

  println("64 bit mean: ",sample_mean," var: ",sample_var)
  println()
  println("32 bit mean: ",mean32," var: ",var32)
  println("dif mean: ",sample_mean-mean32," dif var: ",sample_var-var32)
  println("error mean: ",(sample_mean-mean32)/sample_mean," error var: ",(sample_var-var32)/sample_var)
  println()
  println("16 bit mean: ",mean16," var: ",var16)
  println("dif mean: ",sample_mean-mean16," dif var: ",sample_var-var16)
  println("error mean: ",(sample_mean-mean16)/sample_mean," error var: ",(sample_var-var16)/sample_var)
end

mean_and_var(y,sample_mean,sample_var)


64 bit mean: 9999.998011659798 var: 1.0199227833780644

32 bit mean: 9999.998 var: 1.0199206
dif mean: -3.521520193316974e-5 dif var: 2.1958383915610113e-6
error mean: -3.52152089351513e-9 error var: 2.152945720349752e-6

16 bit mean: 9995.983 var: 16.001316
dif mean: 4.014613222298067 dif var: -14.981393287178577
error mean: 0.0004014614020539912 error var: -14.688752453944625

The 32-bit array shows only a small difference from the 64-bit result ($\sim 10^{-5}$, $10^{-6}$), while the 16-bit array is off by a lot. The error grows faster in the variance calculation because the rounding errors get squared, and there may also be catastrophic cancellation in the variance calculation (it contains a subtraction) when the random scatter is small.
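A quick way to see why Float16 does so badly here: check the spacing between representable values near $10^4$ (a minimal sketch in the same Julia style as above; eps() on a value gives the local spacing).

println(eps(convert(Float16, 10000.)))  # 8.0: adjacent Float16 values near 10^4 are 8 apart
println(eps(convert(Float32, 10000.)))  # 0.0009765625: Float32 spacing is far finer

With a grid 8 units wide, almost every draw of 10000 + randn() rounds to exactly 10000, so the 16-bit variance is dominated by the error in the 16-bit mean: it came out about 4 too low above, and $4^2 = 16$ matches the variance of ~16 printed.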

2c.

Increasing N to $10^5$ should increase the errors in both the 32- and 16-bit cases, since there are more evaluations for rounding errors to accumulate.

The relative error will be smaller in the $\mu = 10^5$ case, because $\mu$ is larger; the absolute difference should stay about the same.

2d.


In [3]:
#CASE 1
#Code from exercise
srand(42)
N = 100000
true_mean = 10000.
y = true_mean+randn(N)

sample_mean = mean(y)
sample_var = var(y)


mean_and_var(y,sample_mean,sample_var)


64 bit mean: 10000.0031598515 var: 1.0088883985077686

32 bit mean: 10000.003 var: 1.0088867
dif mean: 0.0002301639997313032 dif var: 1.7035996264791464e-6
error mean: 2.301639270029202e-8 error var: 1.6885907588975298e-6

16 bit mean: 9996.01 var: 16.000065
dif mean: 3.9933942264997313 dif var: -14.991176451345748
error mean: 0.00039933929646468567 error var: -14.859102823978318

In [4]:
#CASE 2
#Code from exercise
srand(42)
N = 10000
true_mean = 100000.
y = true_mean+randn(N)

sample_mean = mean(y)
sample_var = var(y)


mean_and_var(y,sample_mean,sample_var)


64 bit mean: 99999.9980116598 var: 1.0199227833780122

32 bit mean: 100000.0 var: 1.0200287
dif mean: -0.001988340198295191 dif var: -0.00010592698728317806
error mean: -1.9883402378301593e-8 error var: -0.00010385784983873483

16 bit mean: Inf var: NaN
dif mean: -Inf dif var: NaN
error mean: -Inf error var: NaN

Ah, nice one! In case two we hit an overflow: $10^5$ is bigger than the largest finite Float16 value (65504), so the samples become Inf the moment they are converted. I will blame always working in double precision for not seeing that one coming.
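A minimal check of that explanation (using the pre-1.0 name realmax, to match the Julia version used above):

println(realmax(Float16))         # 65504, the largest finite Float16
println(convert(Float16, 1.0e5))  # Inf: 10^5 overflows on conversion
# Once every sample is Inf, the mean is Inf, and the variance involves
# Inf - Inf terms, which evaluate to NaN.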

In case one, the errors slightly improved rather than getting worse. Maybe the rounding is canceling out and I got lucky? A quick change of random seed doesn't seem to support this. The law of large numbers at play? Maybe the floating-point errors are just another source of random error. So yeah, I am going to go with that.
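To poke at the lucky-cancellation idea a bit more systematically, one could loop over seeds and watch the sign of the 32-bit error (a sketch; the seed values and the ytest name are arbitrary):

for seed in [1, 7, 42, 1234]  # arbitrary seeds
  srand(seed)
  ytest = 10000. + randn(100000)
  y32 = convert(Array{Float32,1}, ytest)
  println(seed, ": relative var error = ", (var(ytest)-var(y32))/var(ytest))
end

If the error flips sign and changes size from seed to seed, that supports treating the rounding as just another source of random noise.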

2e.

  1. Be aware of the range of values allowed by your precision level.
  2. Be mindful of floating-point arithmetic when designing algorithms.
  3. When working with random samples, bigger is better (most of the time).

3a.


In [41]:
srand(42)
N = 1000000
true_mean = 1e6
y = true_mean+randn(N);

$m = \frac{1}{N} \sum_{i=1}^{N} y_i$

$s^2 = \frac{1}{N-1} \sum_{i=1}^N (y_i-m)^2 = \frac{1}{N-1} \left[ \left( \sum_{i=1}^N y_i^2 \right) - N m^2 \right]$

Note: A quick glance at http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance cleared up my confusion about what was meant by "one-pass" vs. "two-pass".


In [5]:
function onepass(y::Array)

  N = length(y)
  sUm = zero(y[1]) # oops: naming this "sum" shadowed the built-in sum(), hence the odd capitalization
  sUm_squared = zero(y[1])
  
  for i in 1:N
    sUm = sUm + y[i]
    sUm_squared = sUm_squared + y[i]^2
  end 
  
  return (sUm_squared - sUm^2/N)*(1/(N-1))

end


Out[5]:
onepass (generic function with 1 method)

In [6]:
onepass(y)


Out[6]:
0.9094409094409094

3b.


In [7]:
function twopass(y::Array)
  N = length(y)
  sUm = zero(y[1]) # same odd capitalization, to avoid shadowing the built-in sum()
  sUm_squared = zero(y[1])  
  
  for i in 1:N
    sUm = sUm + y[i]
  end
  
  m = sUm/N
    
  for i in 1:N
    sUm_squared = sUm_squared + (y[i]-m)^2
  end
  return (sUm_squared)*(1/(N-1))

end


Out[7]:
twopass (generic function with 1 method)

3c.


In [8]:
println("One pass: ",onepass(y),", Two pass:", twopass(y),", Julia var(): ",var(y))
println("One pass error: ",(var(y)-onepass(y))/var(y),", Two pass error:", (var(y)-twopass(y))/var(y))


One pass: 0.9094409094409094, Two pass: 1.001568444572786, Julia var(): 1.0015684445727868
One pass error: 0.0919832644799167, Two pass error: 8.867875425917323e-16

The one-pass algorithm has by far the larger error under the given test conditions.

3e.

Well, first off you might say to never use the one-pass. But what about under a different $N$ or $\mu$?


In [11]:
#Smaller N
srand(42)
N = 10000
true_mean = 1e6
y = true_mean+randn(N);
println("One pass: ",onepass(y),", Two pass:", twopass(y),", Julia var(): ",var(y))
println("One pass error: ",(var(y)-onepass(y))/var(y),", Two pass error:", (var(y)-twopass(y))/var(y))


One pass: 1.0147014701470147, Two pass: 1.0199227833772877, Julia var(): 1.0199227833772826
One pass error: 0.005119322085323428, Two pass error: -5.0072672132735025e-15

In [13]:
#Smaller mean
srand(42)
N = 1000000
true_mean = 1e3
y = true_mean+randn(N);
println("One pass: ",onepass(y),", Two pass:", twopass(y),", Julia var(): ",var(y))
println("One pass error: ",(var(y)-onepass(y))/var(y),", Two pass error:", (var(y)-twopass(y))/var(y))


One pass: 1.001568381207053, Two pass: 1.0015684445727955, Julia var(): 1.0015684445728124
One pass error: 6.326652926186049e-8, Two pass error: 1.6848963309242483e-14

If you knew for sure (or made sure) you were dealing with a big array of small values, then you might prefer the faster one-pass algorithm. The danger is the subtraction inside it: when $\mu \gg \sigma$, the terms $\sum y_i^2$ and $\left(\sum y_i\right)^2/N$ are huge and nearly equal, so catastrophic cancellation wipes out the significant digits, which is exactly what the tests above show.
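To see the scale of that cancellation, compare the two terms of the one-pass formula directly (a sketch reusing the $\mu = 10^6$, $N = 10^6$ setup from above; ybig is an illustrative name):

srand(42)
ybig = 1e6 + randn(1000000)
s  = sum(ybig)
s2 = sum(ybig.^2)
# Both terms are ~N*mu^2 ~ 1e18, while their difference is only ~N*sigma^2 ~ 1e6,
# so roughly 12 of Float64's ~16 significant digits cancel in the subtraction.
println("sum of squares: ", s2)
println("sum^2/N:        ", s^2/length(ybig))
println("difference:     ", s2 - s^2/length(ybig))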

3d.


In [14]:
function var_online(y::Array) # Welford's online algorithm: single pass, incremental mean and M2
  n = length(y)
  mean = zero(y[1])
  M2 = zero(y[1])
  for i in 1:n
    diff_by_i = (y[i]-mean)/i
    mean = mean + diff_by_i
    M2 = M2 + (i-1)*diff_by_i*diff_by_i + (y[i]-mean)*(y[i]-mean)
  end
  variance = M2/(n-1)
end


Out[14]:
var_online (generic function with 1 method)

In [21]:
println("One pass: ",onepass(y),", Two pass:", twopass(y),", online(): ",var_online(y))
println("One pass error: ",(var(y)-onepass(y))/var(y),", Two pass error:", (var(y)-twopass(y))/var(y),", Online error:",(var(y)-var_online(y))/var(y)  )


One pass: 0.9094409094409094, Two pass: 1.001568444572786, online(): 1.001568444614351
One pass error: 0.0919832644799167, Two pass error: 8.867875425917323e-16, Online error: -4.149899663066529e-11

Seems to work better than the regular one-pass. With all the subtractions I'd still watch for catastrophic cancellation on narrow distributions, although the incremental updates keep the subtracted quantities close in scale, which is part of what makes this algorithm behave well in practice.
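A quick hedged probe of that worry (ynarrow is an illustrative test array): shrink $\sigma$ to $10^{-3}$ while keeping $\mu = 10^6$ and compare the three implementations.

srand(42)
ynarrow = 1e6 + 1e-3*randn(100000)  # true variance = 1e-6
println("two pass: ", twopass(ynarrow))
println("online:   ", var_online(ynarrow))
println("one pass: ", onepass(ynarrow))  # expect this one to fall apart completely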

4a.


In [109]:
function make_set(N,mu)
    srand(42) #Random Seed
    true_vel = zeros(N)
    m = ones(N)*mu  # per-point measurement uncertainty
    vel_err = randn(N).*m  #Did somebody say z-score? This takes me back to stats 101! :) 
    #0 velocity with no planet, right?
    vel_obs = true_vel+vel_err
  return vel_obs,true_vel,vel_err,m
end

y,z,err,mu = make_set(100,2)


Out[109]:
([-1.1120537522876766,-0.8887667142145911,0.054310676018293196,-0.5989681807170408,3.55572219611071,-2.289803063449925,-0.9372117643337359,0.31228692527926727,-5.2839820161513655,2.0066198029163016  …  0.2750796038556071,6.299151848253794,-1.4428243969728147,-1.154185393973309,0.9187675505910903,0.4944154121093404,-0.05132498760782233,-1.2641619360530398,-2.0408387097357954,-2.623015600406212],[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0  …  0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],[-1.1120537522876766,-0.8887667142145911,0.054310676018293196,-0.5989681807170408,3.55572219611071,-2.289803063449925,-0.9372117643337359,0.31228692527926727,-5.2839820161513655,2.0066198029163016  …  0.2750796038556071,6.299151848253794,-1.4428243969728147,-1.154185393973309,0.9187675505910903,0.4944154121093404,-0.05132498760782233,-1.2641619360530398,-2.0408387097357954,-2.623015600406212],[2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0  …  2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0])

4b.


In [108]:
# I don't like one-line functions as a rule.
Pnorm(y,sigma,z) = 1/sqrt(2*pi*sigma^2)*exp(-(y-z)^2/(2*sigma^2))

#does it work?
[Pnorm(y[i],2,z[i]) for i in 1:length(y)]
#probably


Out[108]:
100-element Array{Any,1}:
 0.170901  
 0.180717  
 0.199398  
 0.190723  
 0.0410699 
 0.103572  
 0.17873   
 0.197054  
 0.00608369
 0.120585  
 0.111041  
 0.196013  
 0.174414  
 ⋮         
 0.129381  
 0.0275091 
 0.197593  
 0.001399  
 0.153769  
 0.168874  
 0.179496  
 0.193468  
 0.199405  
 0.163352  
 0.118515  
 0.0844064 

4c.


In [95]:
function Pnorm_set(ys::Array,sigmas::Array,zs::Array)
  n = length(ys)
  ps = [Pnorm(ys[i],sigmas[i],zs[i]) for i in 1:n]
  return prod(ps)
end


Out[95]:
Pnorm_set (generic function with 1 method)

4d.


In [110]:
y,z,err,mu = make_set(100,2)
println(Pnorm_set(y,err,z))
y,z,err,mu = make_set(600,2)
println(Pnorm_set(y,err,z))


3.776467160177524e-61
0.0

4e.


In [114]:
function Pnorm_set_log(ys::Array,sigmas::Array,zs::Array)
  n = length(ys)
  ps = [log(Pnorm(ys[i],sigmas[i],zs[i])) for i in 1:n] # I LOVE comprehensions! 
  return sum(ps)
end


Out[114]:
Pnorm_set_log (generic function with 1 method)

Well, the direct product can't handle big N, at least. Let's see how the log version does:


In [117]:
y,z,err,mu = make_set(100,2)
println(Pnorm_set_log(y,err,z))
y,z,err,mu = make_set(600,2)
println(Pnorm_set_log(y,err,z))


-139.1289017137387
-895.3145308517971

Ah, that is better!

4f.

When working with very small floating-point values, it can be useful to work in log space.
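For instance, a minimal sketch of the underflow and the log-space fix (p is an illustrative array):

p = fill(0.1, 600)                 # 600 modest probabilities
println(prod(p))                   # 0.0: the true product, 10^-600, underflows Float64
println(sum([log(x) for x in p]))  # -1381.55... = 600*log(0.1), perfectly representable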

5.

Write it in Python! (just kidding)

