We can look at the data from within this notebook in the same way we did for Chapter 11. Let's dispense with the preliminaries:
In [1]:
# import PmagPy functions
import pmagpy.pmag as pmag
import pmagpy.ipmag as ipmag
import numpy as np # while we are at it, let's import numpy
import matplotlib.pyplot as plt # set up for plotting
%matplotlib inline
and get to it using ipmag.plot_net() together with ipmag.plot_di().
In [2]:
data=np.loadtxt('Chapter_12/ps12-1a.di')# read in data
ipmag.plot_net(1) # plot the equal area net
ipmag.plot_di(di_block=data,title="Problem 12.1a directions") # put on the directions
This looks Fisherian to me! See how the data are pretty much symmetrical about the mean direction? But let's check with ipmag.fishqq().
In [3]:
help(ipmag.fishqq)
In [5]:
ipmag.fishqq(di_block=data) # use data read in problem 1a.
Out[5]:
And it looks like at the 95% confidence level, this data set IS Fisherian.
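If you would rather pull the test statistics out programmatically than read them off the plots, something like the optional cell below works. The key names in the returned dictionary are an assumption here (check help(ipmag.fishqq) for your PmagPy version).
In [ ]:
# capture the object returned by fishqq and inspect the test statistics
qq = ipmag.fishqq(di_block=data)
print(qq)
# the data are consistent with a Fisher distribution when each statistic falls below its
# critical value, e.g. (key names assumed; verify with help(ipmag.fishqq)):
# qq['Mu'] < qq['Mu_critical'] and qq['Me'] < qq['Me_critical']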
Now let's try the other data set.
In [10]:
data=np.loadtxt('Chapter_12/ps12-1b.di')# read in data
ipmag.plot_net(1) # plot the equal area net
ipmag.plot_di(di_block=data,title="Problem 12.1b directions") # put on the directions
Whoa! This one does not look Fisherian. See how there are 'outliers' and the data are spread out more in inclination than in declination? So let's check it.
In [11]:
ipmag.fishqq(di_block=data) # use the data read in for Problem 1b
Out[11]:
And, yeah, this one totally failed.
Let's take a look at the data in Chapter_12/ps12-1c.di
In [24]:
data=np.loadtxt('Chapter_12/ps12-1c.di')# read in data
ipmag.plot_net(1) # plot the equal area net
ipmag.plot_di(di_block=data,title="Problem 12.1c directions") # put on the directions
And now get the Fisher mean using ipmag.fisher_mean.
In [27]:
ipmag.fisher_mean(di_block=data)
Out[27]:
Now we rotate the data to the mean declination and inclination using pmag.dodirot_V(). But first, a little help message would be great.
In [4]:
help(pmag.dodirot_V)
In [34]:
rotdata=pmag.dodirot_V(data,1.8,58) # rotate to a coordinate system centered on the mean direction found above (dec=1.8, inc=58)
ipmag.plot_net(1) # plot the equal area net
ipmag.plot_di(di_block=rotdata) # put on the rotated directions
It looks like the inclinations are spread out too much. Let's see what ipmag.fishqq() has to say.
In [36]:
ipmag.fishqq(di_block=rotdata)
Out[36]:
And sure enough... the inclinations are too spread out: the angles from the mean are not exponentially distributed in the way a Fisher distribution requires, so this data set is not Fisherian.
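Optionally, here is a hand-rolled sketch (not part of the original solution) of what the exponential half of that check is doing: for a Fisher distribution, 1 - cos(angle from the mean) should be roughly exponentially distributed, so we plot its quantiles against exponential quantiles scaled by the sample mean. A roughly straight line would be consistent with Fisherian behavior; ipmag.fishqq() does the formal version with critical values.
In [ ]:
# rough exponential Q-Q check on the rotated directions (illustration only)
fpars = ipmag.fisher_mean(di_block=rotdata)              # Fisher mean of the rotated directions
mean_di = [[fpars['dec'], fpars['inc']]] * len(rotdata)  # repeat the mean for pmag.angle()
angles = pmag.angle(rotdata, mean_di)                    # angle of each direction from the mean
x = np.sort(1. - np.cos(np.radians(angles)))             # should be ~exponential if Fisherian
q = np.arange(1, len(x) + 1) / (len(x) + 1.)             # plotting positions
expected = -np.log(1. - q) * x.mean()                    # exponential quantiles scaled by the sample mean
plt.plot(expected, x, 'o')                               # roughly linear => consistent with exponential
plt.plot(expected, expected, 'r-')                       # 1:1 reference line
plt.xlabel('exponential quantiles')
plt.ylabel('1 - cos(angle from mean)')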
The best way to do a fold test in a Jupyter Notebook is to use ipmag.bootstrap_fold_test():
In [38]:
help(ipmag.bootstrap_fold_test)
So we first have to read in the data and make sure it is in a suitable array, then call ipmag.bootstrap_fold_test( ).
In [39]:
fold_data=np.loadtxt('Chapter_12/ps12-2.dat')
fold_data
Out[39]:
In [40]:
ipmag.bootstrap_fold_test(fold_data)
These data are much better grouped in geographic coordinates and much worse after tilt correction. So these were magnetized after tilting.
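For a rough numerical check on that visual impression (not part of the bootstrap itself), the optional cell below compares the Fisher precision parameter k in geographic coordinates with k after 100% tilt correction. It assumes the columns of ps12-2.dat are declination, inclination, dip direction, and dip, and uses pmag.dotilt() for the rotation.
In [ ]:
# rough check: Fisher k before and after 100% tilt correction
# (assumed column order: dec, inc, dip direction, dip)
geo_block = fold_data[:, 0:2]                           # directions in geographic coordinates
tilt_block = np.array([pmag.dotilt(dec, inc, dip_dir, dip)
                       for dec, inc, dip_dir, dip in fold_data])  # rotate each direction by its bedding
print('k (geographic):     %7.1f' % ipmag.fisher_mean(di_block=geo_block)['k'])
print('k (tilt corrected): %7.1f' % ipmag.fisher_mean(di_block=tilt_block)['k'])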
We know what to do here:
In [189]:
di_block=np.loadtxt('Chapter_12/ps12-3.dat')# read in data
ipmag.plot_net(1) # plot the equal area net
ipmag.plot_di(di_block=di_block,title="Problem 12.3 directions") # put on the directions
To separate the data by polarity, one could just sort by inclination and put all the negative ones in one group and all the positive ones in the other. But this is dangerous for low-inclination data (you could easily have negative inclinations pointing north). A more general approach (which would allow, for example, southward-directed declinations with positive inclinations) is to read the data into a Pandas DataFrame, calculate the principal direction of the entire data set using pmag.doprinc(), and then calculate the angle between each direction and that principal direction. Directions within 90° of the principal direction are one polarity (say, normal) and the rest are the reverse polarity. So, here goes.
In [190]:
help(pmag.doprinc)
In [191]:
# calculate the principal direction for the data set
principal=pmag.doprinc(di_block)
print ('Principal direction declination: ' + '%7.1f'%(principal['dec']))
print ('Principal direction inclination: ' + '%7.1f'%(principal['inc']))
Now we can use some nice Pandas functionality to assign polarity:
In [192]:
import pandas as pd
In [193]:
# make a dataframe
df=pd.DataFrame(di_block)
df.columns=['dec','inc']
# make a column with the principal dec and inc
df['principal_dec'] = principal['dec'] # assign these to the dataframe
df['principal_inc'] = principal['inc']
# make a principal block for comparison
principal_block=df[['principal_dec','principal_inc']].values
# get the angle from each data point to the principal direction
df['angle'] = pmag.angle(di_block,principal_block)
# assign polarity
df.loc[df.angle>90,'polarity'] = 'Reverse'
df.loc[df.angle<=90,'polarity'] = 'Normal'
Now that polarity is assigned using the angle from the principal component, let's filter the data by polarity and then put the data back into arrays for calculating stuff.
In [194]:
normal_data = df[df.polarity=='Normal'].reset_index(drop=True)
reverse_data = df[df.polarity=='Reverse'].reset_index(drop=True)
NormBlock=np.array([normal_data["dec"],normal_data["inc"]]).transpose()
RevBlock=np.array([reverse_data["dec"],reverse_data["inc"]]).transpose()
help(pmag.fisher_mean)
In [195]:
norm_fpars= pmag.fisher_mean(NormBlock)
print ('Mean normal declination: ' + '%7.1f'%(norm_fpars['dec']))
print ('Mean normal inclination: ' + '%7.1f'%(norm_fpars['inc']))
In [196]:
rev_fpars= pmag.fisher_mean(RevBlock)
print ('Mean reverse declination: ' + '%7.1f'%(rev_fpars['dec']))
print ('Mean reverse inclination: ' + '%7.1f'%(rev_fpars['inc']))
Now let's check whether the data are Fisher distributed using our old friend ipmag.fishqq().
In [197]:
ipmag.fishqq(di_block=di_block)
Out[197]:
The uniform null hypothesis fails at the 95% confidence level for the normal data, so the normal mode is not strictly Fisher distributed. So let's try pmag.dobingham() on the whole data set and pmag.dokent() on the normal and reverse data, respectively. Note that dokent() has a different syntax.
In [198]:
norm_kpars=pmag.dokent(NormBlock,len(NormBlock))
print (norm_kpars)
In [199]:
rev_kpars=pmag.dokent(RevBlock,len(RevBlock))
print (rev_kpars)
In [200]:
bpars=pmag.dobingham(di_block)
print (bpars)
And finally the bootstrapped means:
In [201]:
help(pmag.di_boot)
In [202]:
BnDIs=pmag.di_boot(NormBlock) # bootstrapped mean directions for the normal mode
BrDIs=pmag.di_boot(RevBlock) # bootstrapped mean directions for the reverse mode
norm_boot_kpars=pmag.dokent(BnDIs,NN=1) # NN=1 because these are bootstrapped means, not independent observations
rev_boot_kpars=pmag.dokent(BrDIs,NN=1)
And now for the plots, starting with the Fisher confidence ellipses using ipmag.plot_di_mean.
In [203]:
help(ipmag.plot_di_mean)
In [204]:
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
ipmag.plot_di_mean(dec=norm_fpars['dec'],inc=norm_fpars['inc'],a95=norm_fpars['alpha95'],\
marker='*',color='blue',markersize=50)
ipmag.plot_di_mean(dec=rev_fpars['dec'],inc=rev_fpars['inc'],a95=rev_fpars['alpha95'],\
marker='*',color='blue',markersize=50)
The other ellipses get plotted with a different function, ipmag.plot_di_mean_ellipse.
In [205]:
help(ipmag.plot_di_mean_ellipse)
Bingham:
In [206]:
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
ipmag.plot_di_mean_ellipse(bpars,marker='*',color='cyan',markersize=20)
Kent:
In [207]:
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
ipmag.plot_di_mean_ellipse(norm_kpars,marker='*',color='cyan',markersize=20)
ipmag.plot_di_mean_ellipse(rev_kpars,marker='*',color='cyan',markersize=20)
And the bootstrapped ellipses:
In [208]:
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
ipmag.plot_di_mean_ellipse(norm_boot_kpars,marker='*',color='cyan',markersize=20)
ipmag.plot_di_mean_ellipse(rev_boot_kpars,marker='*',color='cyan',markersize=20)
The Kent ellipses do a pretty good job, as do the bootstrapped ones.
In [209]:
help(pmag.flip)
In [210]:
norms,rev_antis=pmag.flip(di_block)
In [211]:
help(ipmag.common_mean_watson)
In [214]:
ipmag.common_mean_watson(norms,rev_antis,plot='yes')
Out[214]:
According to Dr. Watson (the Vw criterion), these two modes are not drawn from distributions sharing a common mean, so they fail the reversals test. Now let's try the function ipmag.reversal_test_bootstrap() instead.
In [215]:
help(ipmag.reversal_test_bootstrap)
In [216]:
ipmag.reversal_test_bootstrap(di_block=di_block)
The Y components of the flipped reverse mode are significantly different from those of the normal mode. Therefore, this data set fails the reversals test. And you should definitely NOT use Bingham statistics on this data set!
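For a rough numerical sense of just how far apart the two modes are, the optional cell below (not part of the original solution) reuses the norms and rev_antis arrays from pmag.flip() above and compares their Fisher means.
In [ ]:
# compare the Fisher means of the normal mode and the flipped reverse mode
norm_mean = pmag.fisher_mean(norms)      # normal mode mean
anti_mean = pmag.fisher_mean(rev_antis)  # flipped (antipodal) reverse mode mean
gamma = pmag.angle([[norm_mean['dec'], norm_mean['inc']]],
                   [[anti_mean['dec'], anti_mean['inc']]])[0]  # angle between the two mean directions
print('angle between mode means: %5.1f degrees' % gamma)
print('alpha_95 (normal): %5.1f, alpha_95 (flipped reverse): %5.1f' %
      (norm_mean['alpha95'], anti_mean['alpha95']))
# an angle well beyond the alpha_95 circles is another sign that the modes do not share a common mean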