First let's set things up for business.
In [10]:
import pmagpy.ipmag as ipmag
import pmagpy.pmag as pmag
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
We need to use the function pmag.apwp (which implements the Besse and Courtillot, 2002 synthetic apparent polar wander paths) to calculate the expected direction for the Mojave Desert (34.5°N, 117.5°W). So let's remind ourselves how it works.
In [55]:
help(pmag.apwp)
Don't forget the minus sign on the longitude!
In [57]:
data=['NA',34.5,-117.5,20]
pmag.apwp(data,print_results=True)
# save the expected direction for plotting later
# (apwp returns [age,paleolat,dec,inc,pole_lat,pole_lon])
age,paleolat,ExpDec,ExpInc,pole_lat,pole_lon=pmag.apwp(data)
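Since west longitudes trip people up, here is a tiny hypothetical helper (`signed_lon` is my own name, not a PmagPy function) that makes the sign convention explicit:

```python
def signed_lon(deg, hemisphere):
    """Convert a longitude quoted as e.g. 117.5 W into signed degrees:
    negative for west, positive for east (the convention pmag.apwp expects)."""
    return -deg if hemisphere.upper() == 'W' else deg

print(signed_lon(117.5, 'W'))  # -117.5
```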
I went to http://earthref.org/MagIC/search and filled in the search criteria as requested, then downloaded a file into a Project Directory called 'MojaveDesert'. Now we can unpack it with ipmag.download_magic as we did in Chapter_15.
In [9]:
help(ipmag.download_magic)
In [15]:
ipmag.download_magic('magic_downloaded_rows.txt',input_dir_path='Chapter_16/Problem_1',\
dir_path='Chapter_16/Problem_1')
Out[15]:
We are asked to make an equal area plot of all the Miocene poles, so we will have to read the data in and plot them.
I know by now (and also from the information on the MagIC database in the PmagPy cookbook, http://earthref.org/PmagPy/cookbook, and in the data model link on the MagIC website) that the locations.txt tables are tab delimited with column headers on the second line. So, I'll read the locations table into a Pandas DataFrame and wrangle that.
In [23]:
file='Chapter_16/Problem_1/locations.txt' # make a filename
alldata=pd.read_csv(file,sep='\t',header=1)
alldata.columns
Out[23]:
In [19]:
alldata.dir_tilt_correction
Out[19]:
Looks like we have some filtering to do. First, we should use only the fully tilt corrected data (dir_tilt_correction=100, i.e., 100% tilt adjusted).
In [20]:
data=alldata[alldata.dir_tilt_correction==100]
And we should verify that the data fall within our bounds:
In [24]:
print (data.lat_s.min(),data.lat_n.max(),data.lon_w.min(),data.lon_e.max())
print (data.age_low.min(),data.age_high.max())
Checks out. So, let's plot it up.
In [28]:
ipmag.plot_net(1) # make an equal area net
ipmag.plot_di(data.dir_dec.values,data.dir_inc.values,color='red')
ipmag.plot_di(ExpDec,ExpInc,color='black',marker='*',markersize=100)# put on the Expected normal direction
ipmag.plot_di((ExpDec-180)%360,-ExpInc,color='black',marker='*',markersize=100)
Wow - that looks weird - so I'm going to plot these in 5 Ma age groups. First, let's look at the ages:
In [29]:
plt.hist(data.age_low)
plt.hist(data.age_high);
In [30]:
young=data[data.age_low<15]
mid=data[(data.age_low>=15)&(data.age_low<20)]
old=data[(data.age_low>=20) & (data.age_low<25)]
In [31]:
ipmag.plot_net(1) # make an equal area net
ipmag.plot_di(ExpDec,ExpInc,color='black',marker='*',markersize=100,legend='yes',label='Expected')# put on the Expected normal direction
ipmag.plot_di((ExpDec-180)%360,-ExpInc,color='black',marker='*',markersize=100)
ipmag.plot_di(young.dir_dec.values,young.dir_inc.values,color='blue',legend='yes',label='Young')
ipmag.plot_di(mid.dir_dec.values,mid.dir_inc.values,color='green',legend='yes',label='Mid')
ipmag.plot_di(old.dir_dec.values,old.dir_inc.values,color='darkorange',legend='yes',label='Old')
So... between ~15 and 25 Ma there must have been some rotation going on. But not everywhere! In the Chapter, there is a discussion about 'displaced terranes'. Perhaps the Mojave is one of those. At least it appears to have undergone significant vertical axis rotation! Or got hit by lightning or something...
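A quick way to quantify that vertical axis rotation: the rotation is just the observed minus the expected declination, wrapped into ±180° so that clockwise rotations come out positive. A minimal sketch (the helper name `rotation_R` is mine, not a PmagPy function; the example declinations are made up):

```python
def rotation_R(d_obs, d_exp):
    """Vertical axis rotation: observed minus expected declination,
    wrapped into [-180, 180) so clockwise rotations are positive."""
    return (d_obs - d_exp + 180) % 360 - 180

# e.g. an observed declination of 40 against an expected 355
# implies 45 degrees of clockwise rotation
print(rotation_R(40, 355))  # 45
```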
Site 522 was drilled on the African plate and the cores range in age from 23 to 35 Ma (or so), so I should use pmag.apwp.
In [58]:
data=['AF',-26,-5,33]
pmag.apwp(data,print_results=True)
In [59]:
data=['AF',-26,-5,23]
pmag.apwp(data,print_results=True)
We'll have to read in the data from Chapter 15, Problem 1 here, assuming they've been unpacked (again, see the solutions to Chapter 15), and calculate averages at the top and the bottom.
In [60]:
specimens=pd.read_csv('Chapter_15/Problem_1/specimens.txt',sep='\t',header=1)
specimens=specimens.dropna(subset=['analysts']) # kill unwanted lines with duplicate or irrelevant info
specimens['site']=specimens['specimen'] # make a site column (here the site names are the specimen names)
sites=pd.read_csv('Chapter_15/Problem_1/sites.txt',sep='\t',header=1)
data=pd.merge(specimens,sites,on='site') # merge the two data frames on site
MaxDepth=data.core_depth.max()
MinDepth=data.core_depth.min()
print (MaxDepth,MinDepth)
I want to take the top and bottom 10 meters.
In [44]:
TopIncs=data[data.core_depth<65].dir_inc.values
BottomIncs=data[data.core_depth>136].dir_inc.values
print ('Top 10 m average inclination: ',pmag.doincfish(TopIncs))
print ('Bottom 10 m average inclination: ',pmag.doincfish(BottomIncs))
Wow. These data (50° and 54°, respectively) match really well with the Besse and Courtillot (2002) predictions of 53° and 55°. In fact, they used the Site 522 data to help make their APWPs!
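As a sanity check, the dipole formula tan(I) = 2 tan(λ) turns those inclinations into paleolatitudes. Here's a minimal sketch in plain numpy (the function name is mine):

```python
import numpy as np

def inc_to_paleolat(inc):
    """Dipole formula tan(I) = 2 tan(lambda), solved for paleolatitude (degrees)."""
    return np.degrees(np.arctan(np.tan(np.radians(inc)) / 2.))

# the average inclinations from the top and bottom of the core
for inc in [50, 54]:
    print(inc, '->', round(inc_to_paleolat(inc), 1))
```

So Site 522 sat at roughly 31-35°S (the sign comes from the polarity and hemisphere, not the formula itself) when these sediments were laid down.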
I looked in the Appendix and found the rotation pole for North America to South Africa at 90 Ma to be $\lambda$=74.6, $\phi$=-23, and $\Omega$ = 33.8. And we were told the North American pole to rotate: $\lambda=75.2^{\circ}, \phi=201^{\circ}$.
In [11]:
import pmagpy.frp as frp
help(frp.get_pole)
In [47]:
Prot=frp.get_pole('nam',90)
In [50]:
Prot=[74.6,-23,33.8] # finite pole of rotation
plat,plon=75.2,201
Now for the pmag.pt_rot bit.
In [51]:
help(pmag.pt_rot)
In [52]:
print (pmag.pt_rot(Prot,[plat],[plon]))
The African pole is $\lambda=66.8^{\circ}, \phi=244.5^{\circ}$, and we got 67.1$^{\circ}$ and 244.4$^{\circ}$. Hot diggety dog. (Now you know how old I am.)
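To put a number on that agreement, we can compute the great-circle angle between the two poles from the dot product of their Cartesian unit vectors. This is a from-scratch sketch in plain numpy rather than a PmagPy call:

```python
import numpy as np

def pole_angle(lat1, lon1, lat2, lon2):
    """Great-circle angle (degrees) between two poles given as lat/lon pairs."""
    def to_cart(la, lo):
        la, lo = np.radians(la), np.radians(lo)
        return np.array([np.cos(la)*np.cos(lo), np.cos(la)*np.sin(lo), np.sin(la)])
    v1, v2 = to_cart(lat1, lon1), to_cart(lat2, lon2)
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1., 1.)))

# reference African pole vs. our rotated North American pole
print(round(pole_angle(66.8, 244.5, 67.1, 244.4), 2))  # well under a degree apart
```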
I went to the MagIC search engine and downloaded late Cretaceous data from Europe as instructed in the book. I saved both the sites and the locations tables. Now I'll unpack the data files with ipmag.download_magic
In [2]:
ipmag.download_magic('magic_downloaded_rows.txt',dir_path='Chapter_16/Problem_4',\
input_dir_path='Chapter_16/Problem_4')
Out[2]:
So, let's take a look at what we have by reading the locations.txt file into a Pandas DataFrame and finding the method codes available.
In [2]:
poles=pd.read_csv('Chapter_16/Problem_4/locations.txt',sep='\t',header=1)
poles.method_codes.unique()
Out[2]:
Now let's fish out the "best" poles with LP-DC4 or LP-DC5.
In [3]:
dc4=poles[poles.method_codes.str.contains('DC4')]
dc5=poles[poles.method_codes.str.contains('DC5')]
best_poles=pd.concat([dc4,dc5])
Some of the data were included in other results, were overprints, or were otherwise superseded. These details are in a comment field called description, so let's read the descriptions and figure out how to eliminate the ones we don't want.
In [4]:
for ind in best_poles.index:
    print (ind, best_poles.loc[ind].description)
Now we want to thumb through the descriptions and eliminate the kind we don't want. For example, we can make a list of unwanted descriptors from the above list ('verprint', 'econdary', 'emagneti', 'uperseded', 'Data incl. in RESULT') and eliminate those. The fragments are deliberately truncated so they match regardless of capitalization ('verprint' catches both 'Overprint' and 'overprint').
In [5]:
Unwanted=['verprint','econdary','emagneti','uperseded','Data incl. in RESULT'] # truncated so they match regardless of capitalization
# make a column for "Use" in best_poles and set to True
best_poles['Use']=True
# now go through and flag the bad boys
for deleteme in Unwanted:
    best_poles.loc[best_poles.description.str.contains(deleteme),'Use']=False # the boolean False, not the string 'False'
selected_poles=best_poles[best_poles.Use==True]
for ind in selected_poles.index:
    print (ind, selected_poles.loc[ind].description)
Here's a list of all the locations we have
In [6]:
selected_poles.location
Out[6]:
Oops. Looks like we have a lot of duplicates in our 'selected' pole list. So to verify, let's look at the latitudes and longitudes:
In [7]:
selected_poles[['pole_lat','pole_lon']]
Out[7]:
And yeah, there are a lot of duplicates. So we want a unique list of poles. I can use the Pandas drop_duplicates function!
In [8]:
selected_poles=selected_poles.drop_duplicates(['pole_lat','pole_lon'])
selected_poles[['pole_lat','pole_lon']]
Out[8]:
Now we have a pared down list of poles.
We have to use the functions frp.get_pole and pmag.pt_rot as in Problem 3.
In [12]:
help(frp.get_pole)
In [13]:
Prot_eur_af=frp.get_pole('eur',100)
Prot_nam_af=frp.get_pole('nam',100)
print (Prot_eur_af,Prot_nam_af)
In [14]:
help(pmag.pt_rot)
In [15]:
lats=selected_poles.pole_lat.values # array of pole latitudes for Europe
lons=selected_poles.pole_lon.values # same for longitudes
RLats_rot2af,RLons_rot2af=pmag.pt_rot(Prot_eur_af,lats,lons) # rotate to fixed Africa
Prot_af_nam=Prot_nam_af[:] # make a copy
Prot_af_nam[2]=-Prot_af_nam[2] # quick change of rotation sense (from AF to NAM)
#rotate to North America
RLats_rot2nam,RLons_rot2nam=pmag.pt_rot(Prot_af_nam,RLats_rot2af,RLons_rot2af)
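The sign flip on $\Omega$ works because a finite rotation about a fixed Euler pole is inverted by negating the angle. Here's a from-scratch check using Rodrigues' rotation formula (not pmag.pt_rot; the 90 Ma NAM pole from Problem 3 and an arbitrary test point):

```python
import numpy as np

def finite_rot(pole_lat, pole_lon, omega, lat, lon):
    """Rotate the point (lat, lon) about an Euler pole by omega degrees (right-hand rule)."""
    def to_cart(la, lo):
        la, lo = np.radians(la), np.radians(lo)
        return np.array([np.cos(la)*np.cos(lo), np.cos(la)*np.sin(lo), np.sin(la)])
    k = to_cart(pole_lat, pole_lon)  # unit vector along the rotation axis
    v = to_cart(lat, lon)            # unit vector of the point to rotate
    w = np.radians(omega)
    # Rodrigues' rotation formula
    vr = v*np.cos(w) + np.cross(k, v)*np.sin(w) + k*np.dot(k, v)*(1-np.cos(w))
    return np.degrees(np.arcsin(vr[2])), np.degrees(np.arctan2(vr[1], vr[0])) % 360

# rotating by omega and then by -omega should land back on the starting point
la1, lo1 = finite_rot(74.6, -23., 33.8, 45., 30.)
la2, lo2 = finite_rot(74.6, -23., -33.8, la1, lo1)
print(round(la2, 6), round(lo2, 6))  # back to 45.0 30.0
```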
We can plot those poles using the ipmag.plot_vgp function (similar to Chapter 14). For the map itself we can use either the (old, deprecated) Basemap toolkit or cartopy (you might have to install one of these).
Now we need to do two things: 1) get the rotated poles, which we already have in the RLats_rot2nam and RLons_rot2nam arrays, and 2) figure out what the pole for North America was. First the latter. I can use pmag.apwp for this, but I have to put in dummy lat and lon values, ignore the Paleolat, Dec, Inc output, and zoom in on the Pole_lat and Pole_long output.
In [16]:
help(pmag.apwp)
In [17]:
pole_stuff=pmag.apwp(['NA',33,-117,100])
NApole_lat,NApole_lon=pole_stuff[4],pole_stuff[5]
print (NApole_lat,NApole_lon)
So the North American pole was 81.5N/198.3E at 100 Ma.
And we plot the stuff. Here are examples of both the (old, deprecated) Basemap way and the newer cartopy way:
In [18]:
has_basemap, Basemap = pmag.import_basemap()
has_cartopy, Cartopy = pmag.import_cartopy()
In [83]:
if not has_basemap:
    print('You will need to correctly install basemap in order to continue this cell')
else:
    m=Basemap(projection='ortho',lat_0=90,lon_0=0,resolution='c')
    plt.figure(num=3,figsize=(6,6))
    m.drawcoastlines()
    m.fillcontinents(color='bisque')
    m.drawmeridians(np.arange(0,360,30))
    m.drawparallels(np.arange(-60,90,30))
    ipmag.plot_vgp(m,RLons_rot2nam,RLats_rot2nam,\
                   color='b',label='rotated European poles',legend='yes')
    ipmag.plot_vgp(m,[NApole_lon],[NApole_lat], color='r',\
                   label='NA Pole',marker='^',legend='yes')
In [35]:
if not has_cartopy:
    print('You will need to correctly install cartopy in order to continue this cell')
else:
    map_axis=ipmag.make_orthographic_map(central_latitude=90,central_longitude=0,figsize=(6,6),land_color='bisque')
    ipmag.plot_vgp(map_axis, vgp_lon=[NApole_lon], vgp_lat=[NApole_lat], color='red',\
                   markersize=30, legend=True,label='NA Pole')
    ipmag.plot_vgp(map_axis, vgp_lon=RLons_rot2nam, vgp_lat=RLats_rot2nam, color='blue',\
                   markersize=30, legend=True,label='rotated European poles')
Boy that is super crappy agreement! The European poles are pretty scattered and a lot are 'far sided'. A lot of them are sediments. I wonder whether we have an 'inclination shallowing' problem. You can't tell because we only have the poles - not the original data. So someone should get to work on this problem.
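If you want to see how big an effect inclination shallowing can be, the usual model is King's (1955) flattening law, tan(I_obs) = f tan(I_field). A minimal sketch (the flattening factor f = 0.6 and the 60° field inclination are just illustrative values):

```python
import numpy as np

def flatten_inc(inc_field, f):
    """King (1955) flattening law: tan(I_obs) = f * tan(I_field)."""
    return np.degrees(np.arctan(f * np.tan(np.radians(inc_field))))

# a 60 degree field inclination recorded with f = 0.6 comes out much shallower
print(round(flatten_inc(60., 0.6), 1))
```

Shallowed inclinations at mid-latitude sites would push the resulting poles toward the far side of the globe, which is just the pattern we see.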
For this we need the blessed continents.get_continent and pmagplotlib.plot_map functions. So first, let's take them for a spin.
In [36]:
import pmagpy.continents as continent
help (continent.get_continent)
In [37]:
import pmagpy.pmagplotlib as pmagplotlib
help(pmagplotlib.plot_map)
Here is a little script that will do it.
In [41]:
conts=['eur','nam','af','sam','ind','aus','ant'] # set up a list of continents
plt.figure(1,(10,10))
cnt=0
syms=['r.','y.','c.','b.','g.','m.','k.'] # symbols for continents
Opts = {'latmin': -90, 'latmax': 90, 'lonmin': 0., 'lonmax': 360., \
        'lon_0': 10,'proj': 'moll', 'sym': 'r.', 'symsize': 3,\
        'pltgrid': False, 'res': 'c', 'boundinglat': 0.}
for cont in conts:
    Opts['sym']=syms[cnt] # set the symbol/color for this continent
    cnt+=1 # increment the symbol counter
    cont_pts=continent.get_continent(cont).transpose() # get the continent outline points
    #rotpol=frp.get_pole(cont,10) # get the rotation pole to fixed africa
    #rot_pts=pmag.pt_rot(rotpol,cont_pts[0],cont_pts[1])
    rot_pts=cont_pts # no rotation applied here
    if has_cartopy:
        pmagplotlib.plot_map(1,rot_pts[0],rot_pts[1],Opts)
    elif has_basemap:
        pmagplotlib.plot_map_basemap(1,rot_pts[0],rot_pts[1],Opts)
Well, it is okay... For serious reconstructions we really should be using GPlates, which is beautiful and very functional, but for the purpose of learning about these things, this approach is fine.