# IPython Notebook for turning in solutions to the problems in the Essentials of Paleomagnetism Textbook by L. Tauxe

## Problem 1a

On the command line, you can type:

fishrot.py -n 20 -D 12 -I 45 -k 25 >ps11_prob1a.dat

and then

eqarea.py -f ps11_prob1a.dat

which, if you save the plot as a png and display it here, will give you something like this (note that these are random samples, so your results will be slightly different!).



In [72]:

from IPython.display import Image
Image(filename='ps11_prob1.png')




Out[72]:



For the challenge problem, we need to call a few functions from pmag, ipmag and pmagplotlib to create the Fisher distributed set of directions and then plot them.



In [73]:

import sys
#change to match where the PmagPy folder is on your computer
sys.path.insert(0, '/Users/ltauxe/PmagPy')
import pmag,pmagplotlib,ipmag # import PmagPy functions
import numpy as np # while we are at it, let's import numpy
import matplotlib.pyplot as plt # set up for plotting
%matplotlib inline



The pmag module has two functions we can use here: fshdev(kappa), which generates Fisher distributed directions drawn from a distribution with precision parameter kappa and a vertical true mean direction, and dodirot(D,I,Dbar,Ibar), which rotates a direction D,I to a new reference direction Dbar,Ibar.



In [74]:

def fishy(N,kappa,Dbar,Ibar):  # let's write this as a function we can re-use
    """
    call to fishy(N,kappa,Dbar,Ibar) returns lists of D,I drawn from
    distribution with true mean Dbar,Ibar.
    """
    D,I=[],[] # set up containers for our Fisher distributed Decs and Incs
    for i in range(N): # do this N times
        dec,inc=pmag.fshdev(kappa) # direction drawn from distribution with vertical mean
        drot,irot=pmag.dodirot(dec,inc,Dbar,Ibar) # rotate to the desired mean direction
        D.append(drot)
        I.append(irot)
    return np.array([D,I]) # numpy has to be imported first
N,kappa=20,25 # we want 20 directions drawn from a Fisher distribution with kappa=25.
Dbar,Ibar=12,45 # we want to rotate these from vertical to a true mean of Dbar and Ibar
DI=fishy(N,kappa,Dbar,Ibar)




In [75]:

plt.figure(num=1,figsize=(6,6),dpi=160) # set up the figure object
ipmag.plot_net(1) # plot the equal area net
ipmag.plot_di(DI[0],DI[1]) # put on the directions
plt.title("Fisher distributed directions") # give it a title




Out[75]:



## Problem 1b

Here we have to write a program to read in the data from Problem 1a and calculate the Fisher statistics. We can re-use some of the code we wrote before (dir2cart and cart2dir), which also happens to be part of the pmag module, and write a new function for the Fisher statistics. Notice the use of a doc string (enclosed in triple quotes) at the front of the function.



In [76]:

def fisher_mean(data):
    """
    calculates fisher statistics for data
    """
    R = 0 # initialize R, the resultant vector length, to zero
    Xbar,X=[0,0,0],[] # initialize Xbar, the container for the mean x,y,z
    fpars={} # dictionary for the Fisher statistics
    N=len(data) # number of data points incoming
    if N < 2: # you can't do Fisher statistics on fewer than 2 directions
        return fpars
    # call to dir2cart transforms directions to cartesian coordinates
    X=pmag.dir2cart(data) # pmag needs to be imported before calling.
    for i in range(len(X)):
        for c in range(3):
            Xbar[c]+=X[i][c]  # get the sum of each cartesian coordinate
    for c in range(3):
        R+=Xbar[c]**2  # get the sum of squares of the component sums.
    R=np.sqrt(R) # take the square root
    for c in range(3):
        Xbar[c]=Xbar[c]/R  # normalize by the resultant vector length
    dir=pmag.cart2dir(Xbar) # convert back to directions
    fpars["dec"]=dir[0]
    fpars["inc"]=dir[1]
    fpars["n"]=N
    fpars["r"]=R
    if N!=R: # if perfectly aligned, kappa is infinite!
        k=(N-1.)/(N-R)
        fpars["k"]=k
        csd=81./np.sqrt(k)
    else:
        fpars['k']='inf'
        csd=0.
    b=20.**(1./(N-1.)) -1
    a=1-b*(N-R)/R
    if a<-1: a=-1
    a95=np.arccos(a)*180./np.pi
    fpars["alpha95"]=a95
    fpars["csd"]=csd
    if a<0: fpars["alpha95"] = 180.0
    return fpars




In [77]:

data=np.loadtxt('ps11_prob1a.dat') # read in the directions saved in Problem 1a
fpars= fisher_mean(data)
print fpars # it isn't pretty, but it works.




{'csd': 19.171321106824763, 'k': 17.851139957008783, 'n': 20, 'r': 18.935642202920484, 'alpha95': 7.9452552853228475, 'dec': 17.364437186600135, 'inc': 45.373679701116515}



## Problem 1c

We could use fishrot.py - OR - we could use our own little function, fishy, from Problem 1a. (Careful: fishy returns a 2xN array, while fisher_mean expects an Nx2 array of [dec,inc] rows - that is why the call below reports n=2. We transpose DI2 before using it again.)



In [78]:

DI2=fishy(N,kappa,Dbar,Ibar)
fpars2=fisher_mean(DI2)
print fpars2




{'csd': 24.737070318626522, 'k': 10.721943405746149, 'n': 2, 'r': 1.9067333260251984, 'alpha95': 85.950009256860696, 'dec': 41.991008870420565, 'inc': 41.182838953646957}



Now we can write a little function to calculate Watson's F statistic (Equation 11.16 in the Essentials textbook).



In [79]:

def watsonsF(DI1,DI2):
    # first calculate R for the combined data set, then R1 and R2 for each individually.
    DI=np.concatenate((DI1,DI2),axis=0) # create a new array from two smaller ones
    fpars=fisher_mean(DI) # re-use our function from Problem 1b
    fpars1=fisher_mean(DI1)
    fpars2=fisher_mean(DI2)
    N=fpars['n']
    R=fpars['r']
    R1=fpars1['r']
    R2=fpars2['r']
    F=(N-2.)*((R1+R2-R)/(N-R1-R2))
    return F



And now we call it.



In [80]:

# if you print out data and DI2, you will see that DI2 needs to be transposed into an Nx2 array
# of [dec,inc] rows to match data
print 'F = %10.5f'%(watsonsF(data,DI2.transpose()))




F =    1.58572



So how do we figure out the critical value of F? There are F tables at http://www.socr.ucla.edu/Applets.dir/F_Table.html. Looking at the table for $\alpha$=0.05 (95% confidence) with 2 and 2(N-2) degrees of freedom (here N is the total number of directions in the two data sets combined), I found a critical value of 3.117. Our value above is much lower than that, so these two data sets are indeed probably drawn from the same distribution.
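Rather than reading a printed table, we can compute the critical value directly. A minimal sketch using scipy (assuming scipy is available in your environment; it is not part of PmagPy):

```python
from scipy.stats import f  # scipy's F distribution

N = 40                               # total directions: 20 in each data set
crit = f.ppf(0.95, 2, 2 * (N - 2))   # 95th percentile of F with 2 and 2(N-2) df
print(crit)
```

This should agree with the tabulated value of about 3.1 for 2 and 76 degrees of freedom.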

## Problem 1d

So here all our work will pay off, because we can just re-use the functions we already wrote.



In [81]:

DI3=fishy(N,kappa,55,60).transpose()
print 'F = %10.5f'%(watsonsF(data,DI3))  # we already did the transpose thing




F =   19.91000



And those two are clearly NOT drawn from the same distribution!

To check against the PmagPy program watsonsF.py, we need to save the data and run that program on the command line.



In [82]:

DI2=DI2.transpose() # need to transpose these before saving
np.savetxt('../Chapter_11/ps11_prob1c.dat',DI2) # saves as a text file
np.savetxt('../Chapter_11/ps11_prob1d.dat', DI3)



On the command line (in the directory with all these files), you type:

watsonsF.py -f ps11_prob1a.dat -f2 ps11_prob1c.dat

and get:

0.400958822747 3.117

These are the values for F and the critical value of F for this N.

And re-running it as:

watsonsF.py -f ps11_prob1a.dat -f2 ps11_prob1d.dat

we get:

18.3195066396 3.117

so our code is working the same way watsonsF.py works.

## Problem 1e

Now we can run PmagPy's watsonsV.py from the command line and see how the two methods compare.

typing:

watsonsV.py -f ps11_prob1a.dat -f2 ps11_prob1c.dat

and after a bunch of counting, I got this: Watson's V, Vcrit: 0.8 6.3. Save the image as a png and display it:



In [83]:

Image(filename='ps11_prob1e_1.png')




Out[83]:



Re-doing this for the other pair of files, I got: Watson's V, Vcrit: 36.6 6.2



In [84]:

Image(filename='ps11_prob1e_2.png')




Out[84]:



So the two methods lead to the same conclusions.

## Problem 2a

For this, we re-use our function fishy above and calculate the average inclination value (using Gaussian statistics). I'm having so much fun scripting that I'm going to write my own program to average a list of numbers.



In [85]:

DI4=fishy(N,kappa,0,70) # I'm using the same N and kappa as for the others
Incs=DI4[1]




In [86]:

def average(data):
    Xbar=0
    for i in range(len(data)):
        Xbar+=data[i]
    return Xbar/len(data) # that was quick




In [87]:

print average(Incs)




68.9359030043
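Incidentally, numpy already has this built in. A quick check that a hand-rolled average like the one above matches np.mean (the inclination values here are made up, just for the comparison):

```python
import numpy as np

def average(data):
    # the same hand-rolled mean as above
    total = 0.
    for x in data:
        total += x
    return total / len(data)

incs = [63.2, 71.5, 68.9, 74.0]     # made-up inclinations for the check
print(average(incs), np.mean(incs))  # the two agree
```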



## Problem 2b

For this, I can re-use fisher_mean from above. I like this function stuff - it saves a lot of time!



In [88]:

print fisher_mean(DI4.transpose())['inc']




76.014008853



See how the Fisher mean is steeper than the Gaussian mean? Averaging the inclinations alone biases the estimate toward shallower values.
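To see why, here is a self-contained numpy sketch (not PmagPy code) that draws Fisher (von Mises-Fisher) directions about a vertical mean (inc = 90) and compares the arithmetic mean of the inclinations with the inclination of the vector (Fisher) mean; the sampling formula for the cosine of the off-axis angle is the standard inversion method:

```python
import numpy as np

rng = np.random.default_rng(42)
kappa, N = 25, 5000
u = rng.random(N)
w = 1 + np.log(u + (1 - u) * np.exp(-2 * kappa)) / kappa  # cos(angle from mean)
phi = 2 * np.pi * rng.random(N)                           # uniform azimuth
x = np.sqrt(1 - w**2) * np.cos(phi)
y = np.sqrt(1 - w**2) * np.sin(phi)
z = w                                                     # mean points straight down

gauss_inc = np.degrees(np.arcsin(z)).mean()      # arithmetic mean of inclinations
R = np.array([x.sum(), y.sum(), z.sum()])        # resultant vector
fisher_inc = np.degrees(np.arcsin(R[2] / np.linalg.norm(R)))
print(gauss_inc, fisher_inc)   # the vector-mean inclination stays near 90
```

The scatter in inclination is one-sided near the vertical, so averaging inclinations alone always pulls the estimate shallow, while the vector mean does not.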

## Problem 2c

Well I'm totally going to call pmag.doincfish from inside this notebook.



In [89]:

print pmag.doincfish(Incs)




{'ginc': 68.935903004336865, 'k': 39.638253261205811, 'n': 20, 'r': 19.520665053659279, 'alpha95': 0.99712943022744982, 'csd': 12.865532457616053, 'inc': 70.965903004336738}



So that program did pretty well at fixing the problem!

## Problem 3a

For this problem, we first need to read in the data file ps11_prob3a.dat from the Chapter_11 folder and peel off the declination, inclination data.



In [90]:

import pandas as pd  # let's use this cool module for wrangling data!
data=np.loadtxt('../Chapter_11/ps11_prob3a.dat') # read in dec, inc, dip direction and dip
frame=pd.DataFrame(data,columns=['dec','inc','dip_dir','dip'])
print frame.head() # take a peek at the first few rows




dec   inc  dip_dir  dip
0    9.7  53.0      105   22
1  317.0  44.2      105   22
2  333.1   8.9      105   22
3  354.3  -0.1      105   22
4  338.9  50.2      105   22



And because we now know how, let's make a quickie plot.



In [91]:

DI=np.array([frame['dec'],frame['inc']]).transpose() # peel out the decs and incs and make a new array
plt.figure(num=2,figsize=(6,6),dpi=160) # set up the figure object
ipmag.plot_net(2) # plot the equal area net
ipmag.plot_di(frame['dec'],frame['inc']) # put on the directions






Now we can use the function pmag.doprinc to calculate the principal component of the directional data.
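Under the hood, the principal component comes from the eigenvector of the orientation matrix with the largest eigenvalue. Here is a self-contained numpy sketch of that approach (the function name principal_direction is hypothetical, not PmagPy's; pmag.doprinc returns much more, as shown below):

```python
import numpy as np

def principal_direction(decs, incs):
    # convert directions to cartesian unit vectors (x=north, y=east, z=down)
    d, i = np.radians(decs), np.radians(incs)
    X = np.column_stack([np.cos(d) * np.cos(i),
                         np.sin(d) * np.cos(i),
                         np.sin(i)])
    T = np.dot(X.T, X) / len(X)      # orientation matrix
    vals, vecs = np.linalg.eigh(T)   # eigenvalues in ascending order
    v1 = vecs[:, -1]                 # eigenvector with the largest eigenvalue
    if np.dot(X, v1).sum() < 0:      # pick the hemisphere most vectors point into
        v1 = -v1
    dec = np.degrees(np.arctan2(v1[1], v1[0])) % 360.
    inc = np.degrees(np.arcsin(v1[2]))
    return dec, inc
```

Unlike the Fisher mean, this eigenvector approach does not care about polarity, which is exactly why it is useful for a dual-polarity data set like this one.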



In [92]:

princ_pars=pmag.doprinc(DI)
print princ_pars




{'V3inc': 11.089856927852404, 'V3dec': 253.50393161917111, 'V2inc': 34.703324574333941, 'V2dec': 155.70255066070376, 'Edir': array([ 335.70255066,  -34.70332457,    1.        ]), 'N': 80, 'tau2': 0.23065613747170227, 'tau3': 0.046249398148129568, 'tau1': 0.72309446438016811, 'dec': 358.61577458597367, 'inc': 53.062603657226525}



Any direction more than 90° from the principal direction (princ_pars['dec'], princ_pars['inc']) belongs in the reverse polarity group, so we can use the function pmag.angle to separate the two groups. First let's paste the principal directions onto our pandas frame.
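The angle between two directions is just the arc-cosine of the dot product of their unit vectors. A minimal single-pair sketch of what pmag.angle computes (this helper is for illustration only; pmag.angle itself works on arrays of direction pairs):

```python
import numpy as np

def angle(d1, d2):
    # angle in degrees between two (dec, inc) directions via the dot product
    def to_cart(dec, inc):
        dec, inc = np.radians(dec), np.radians(inc)
        return np.array([np.cos(dec) * np.cos(inc),
                         np.sin(dec) * np.cos(inc),
                         np.sin(inc)])
    c = np.dot(to_cart(*d1), to_cart(*d2))
    return np.degrees(np.arccos(np.clip(c, -1., 1.)))  # clip guards against rounding
```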



In [93]:

frame['principal_dec']=princ_pars['dec'] # add the principal components to the panda frame.
frame['principal_inc']=princ_pars['inc']
print frame.head() # I love how this works




dec   inc  dip_dir  dip  principal_dec  principal_inc
0    9.7  53.0      105   22     358.615775      53.062604
1  317.0  44.2      105   22     358.615775      53.062604
2  333.1   8.9      105   22     358.615775      53.062604
3  354.3  -0.1      105   22     358.615775      53.062604
4  338.9  50.2      105   22     358.615775      53.062604



Let's find out what pmag.angle expects.



In [94]:

print pmag.angle.__doc__




call to angle(D1,D2) returns array of angles between lists of two directions D1,D2 where D1 is for example, [[Dec1,Inc1],[Dec2,Inc2],etc.]



So first we have to set up D2 with the principal directions (D1 is already set up - it is DI). Then we call pmag.angle.



In [95]:

PD=np.array([frame['principal_dec'],frame['principal_inc']]).transpose()
frame['angle']=pmag.angle(DI,PD)
print frame.head()




dec   inc  dip_dir  dip  principal_dec  principal_inc      angle
0    9.7  53.0      105   22     358.615775      53.062604   6.659466
1  317.0  44.2      105   22     358.615775      53.062604  28.438617
2  333.1   8.9      105   22     358.615775      53.062604  48.741348
3  354.3  -0.1      105   22     358.615775      53.062604  53.284495
4  338.9  50.2      105   22     358.615775      53.062604  12.524863



Now all we have to do is test the angle and sort the two directions into their own frames.



In [96]:

Mode_1= frame.loc[frame['angle']<=90] # these are the normal directions
Mode_2=frame.loc[frame['angle']>90] # these are the reverse directions




In [97]:

Mode_2_flip=Mode_2.copy() # make a copy of Mode_2.
Mode_2_flip['dec']=(Mode_2['dec']-180.)%360. # subtracts 180 but does modular 360 (stays 0=>360 this way.)
Mode_2_flip['inc']=-Mode_2['inc'] # take the negative of the inclination
flip_frame=pd.concat([Mode_1,Mode_2_flip]) # this is the list of all the data, flipping the reverse mode.
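The flip itself is simple enough to sanity-check in isolation; a minimal sketch of the same operation on a single direction (the function name flip is just for illustration):

```python
def flip(dec, inc):
    # antipode of a direction: rotate declination by 180 (mod 360), negate inclination
    return ((dec - 180.) % 360., -inc)

print(flip(185., -40.))  # a reverse direction becomes (5.0, 40.0)
```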



Now we can call our old friend fisher_mean from Problem 1b and finish the job.



In [98]:

DIflip=np.array([flip_frame['dec'],flip_frame['inc']]).transpose()
geo_stats=fisher_mean(DIflip)
# print out the statistics for the (flipped) geographic reference frame.
print 'Fisher statistics for flipped data set',geo_stats




Fisher statistics for flipped data set {'csd': 33.05703060396209, 'k': 6.004023149304996, 'n': 80, 'r': 66.842155995160553, 'alpha95': 7.0721041360252466, 'dec': 358.94196930277531, 'inc': 51.257553593765529}



Now we have to call pmag.dotilt as we did in Chapter 9, but now stepping through the pandas frame.



In [99]:

dec_rot,inc_rot=[],[]
for i in range(len(flip_frame.dec)):
    drot,irot=pmag.dotilt(flip_frame.loc[i].dec,flip_frame.loc[i].inc,flip_frame.loc[i].dip_dir,flip_frame.loc[i].dip)
    dec_rot.append(drot)
    inc_rot.append(irot)
flip_frame['dec_rot']=dec_rot
flip_frame['inc_rot']=inc_rot



Repeat fisher_mean call with tilt corrected data:



In [100]:

DItilt=np.array([flip_frame['dec_rot'],flip_frame['inc_rot']]).transpose()
tilt_stats=fisher_mean(DItilt)
print 'Fisher statistics for Mode 1',tilt_stats




Fisher statistics for Mode 1 {'csd': 22.448281345346874, 'k': 13.019785947683799, 'n': 80, 'r': 73.932311919916472, 'alpha95': 4.5647302763823205, 'dec': 359.13686781062501, 'inc': 36.082941162612741}



To finish this off, we take the ratio of the two k values (before and after tilt adjustment) and compare it with the F-table value for $\alpha$=0.05 and degrees of freedom 2(N1-1) and 2(N2-1). Note that there is also a pmag.fcalc() function for looking up critical F values.
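As in Problem 1c, the critical value for the stated degrees of freedom can be computed directly; a sketch with scipy (assuming scipy is available), using 2(N-1) degrees of freedom for each of the two estimates of k, with N = 80 directions in each case:

```python
from scipy.stats import f  # scipy's F distribution

N = 80
df = 2 * (N - 1)            # degrees of freedom for each estimate of k
crit = f.ppf(0.95, df, df)  # 95th percentile of F(158, 158)
print(crit)
```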



In [101]:

print geo_stats['k']/tilt_stats['k']




0.461146072096




In [102]:

print pmag.fcalc(2*(geo_stats['n']-1),2*(tilt_stats['n']-1)) # df = 2(N-1) for each estimate of k




1.6366



The k ratio of tilt-adjusted to geographic coordinates is 1/0.46, or about 2.2, which exceeds the critical value: the directions are significantly better grouped after tilt adjustment, so our data pass the McElhinny fold test.