The Magnetics Information Consortium (MagIC) maintains a database of published rock and paleomagnetic data: https://www.earthref.org/MagIC
Many PmagPy scripts are designed to work with data in the MagIC format. This notebook uses Data Model 3.0: https://www.earthref.org/MagIC/data-models/3.0
There are nine basic tables: contribution, locations, sites, samples, specimens, measurements, criteria, ages and images. These are tab delimited data tables with the first line consisting of a delimiter and the table name: (e.g., tab measurements). All of the examples here are tab delimited. The second line contains the column names: (e.g., specimen experiment method_codes treat_temp.....). Each subsequent line is a single record.
See the first few lines of this sample file below:
In [1]:
with open('data_files/3_0/McMurdo/samples.txt') as f:
    for line in f.readlines()[:3]:
        print(line, end="")
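The two-line header convention described above is easy to generate and parse yourself. Here is a minimal sketch (the table contents and column names are invented for illustration); note that `header=1` tells Pandas to skip the delimiter/table-name line and use the second line as column names:

```python
import io
import pandas as pd

# build a minimal MagIC-style table in memory: delimiter + table name,
# then column names, then one record per line (all tab delimited)
magic_text = (
    "tab\tsites\n"
    "site\tlat\tlon\n"
    "AZ01\t34.5\t-112.1\n"
)
# header=1 skips the "tab<TAB>sites" line and uses line 2 as column names
df = pd.read_csv(io.StringIO(magic_text), sep='\t', header=1)
print(df.columns.tolist())  # ['site', 'lat', 'lon']
```

This is exactly the pattern used throughout this notebook to read real MagIC files from disk.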
This notebook is one of a series of notebooks that demonstrate the functionality of PmagPy.
If you want to make changes to this notebook, you should make a copy (see File menu). Otherwise each time you update PmagPy, your changes will be overwritten.
To use the functions in this notebook, we have to import the PmagPy modules pmagplotlib, pmag and ipmag and some other handy functions for use in the notebook. This is done in the following code block which must be executed before running any other code block. To execute, click on the code block and then click on the "Run" button in the menu.
In order to access the example data, this notebook is meant to be run in the PmagPy-data directory (PmagPy directory for developers).
Try it! Run the code block below (click on the cell and then click 'Run'):
In [1]:
import pmagpy.pmag as pmag
import pmagpy.pmagplotlib as pmagplotlib
import pmagpy.ipmag as ipmag
import pmagpy.contribution_builder as cb
from pmagpy import convert_2_magic as convert
import matplotlib.pyplot as plt # our plotting buddy
import numpy as np # the fabulous NumPy package
import pandas as pd # and of course Pandas
# test if Basemap and/or cartopy is installed
has_basemap, Basemap = pmag.import_basemap()
has_cartopy, Cartopy = pmag.import_cartopy()
# test if xlwt is installed (allows you to export to excel)
try:
    import xlwt
    has_xlwt = True
except ImportError:
    has_xlwt = False
# This allows you to make matplotlib plots inside the notebook.
%matplotlib inline
from IPython.display import Image
import os
print('All modules imported!')
MagIC formatted data files can be imported to a notebook in one of two ways: importing to a Pandas DataFrame, or importing to a list of dictionaries using the pmag.magic_read() function.
In this notebook, we generally read MagIC tables into a Pandas Dataframe with a command like:
meas_df = pd.read_csv('MEASUREMENTS_FILE_PATH',sep='\t',header=1)
These data can then be manipulated with Pandas functions (https://pandas.pydata.org/).
In [3]:
meas_df=pd.read_csv('data_files/3_0/McMurdo/measurements.txt',sep='\t',header=1)
meas_df.head()
Out[3]:
Alternatively, the user may wish to use a list of dictionaries compatible with many pmag functions. For that, use the pmag.magic_read() function:
In [4]:
help(pmag.magic_read)
In [5]:
meas_dict,file_type=pmag.magic_read('data_files/3_0/McMurdo/measurements.txt')
print(file_type)
print(meas_dict[0])
In [6]:
help(pmag.magic_write)
In [7]:
meas_dicts = meas_df.to_dict('records')
pmag.magic_write('my_measurements.txt', meas_dicts, 'measurements')
Out[7]:
[MagIC Database] [command line version]
MagIC tables have many columns, only some of which are used in a particular instance, so combining files of the same type must be done carefully to ensure that the right data come under the right headings. The program combine_magic can be used to combine any number of MagIC files of a given type.
It reads in MagIC formatted files of a common type (e.g., sites.txt) and combines them into a single file, taking care that all the columns are preserved. For example, if there are both AF and thermal data from a study and we created a measurements.txt formatted file for each, we could use combine_magic.py on the command line to combine them together into a single measurements.txt file. In a notebook, we use ipmag.combine_magic().
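Under the hood, combining tables of the same type amounts to an outer concatenation that keeps the union of the columns. A Pandas-only sketch of that idea (the column names and values here are invented for illustration, not taken from the example files):

```python
import pandas as pd

# two toy "measurements" tables that share some columns but not others
af_df = pd.DataFrame({'specimen': ['s1'], 'treat_ac_field': [0.01]})
therm_df = pd.DataFrame({'specimen': ['s2'], 'treat_temp': [473.0]})

# pd.concat with the default outer join keeps every column from both;
# values missing from one table are filled with NaN
combined = pd.concat([af_df, therm_df], ignore_index=True, sort=False)
print(sorted(combined.columns))
```

ipmag.combine_magic() does this plus the MagIC-specific bookkeeping (the header lines, NaN handling, and writing the output file).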
In [8]:
help(ipmag.combine_magic)
Here we make a list of names of two MagIC formatted measurements.txt files and use ipmag.combine_magic() to put them together.
In [9]:
filenames=['data_files/combine_magic/af_measurements.txt','data_files/combine_magic/therm_measurements.txt']
outfile='data_files/combine_magic/measurements.txt'
ipmag.combine_magic(filenames,outfile)
Out[9]:
Files downloaded from the MagIC search interface have ages that are in the original units, but what is often desired is for them to be in a single unit. For example, if we searched the MagIC database for all absolute paleointensity data (records with method codes of 'LP-PI-TRM') from the last five million years, the data sets have a variety of age units. We can use pmag.convert_ages() to convert them all to millions of years.
First we follow the instructions for unpacking downloaded files in download_magic.
In [10]:
ipmag.download_magic('magic_downloaded_rows.txt',dir_path='data_files/convert_ages/',
input_dir_path='data_files/convert_ages/')
Out[10]:
After some minimal filtering using Pandas, we can convert a DataFrame to a list of dictionaries required by most PmagPy functions and use pmag.convert_ages() to convert all the ages. The converted list of dictionaries can then be turned back into a Pandas DataFrame and either plotted or filtered further as desired.
In this example, we filter for data older than the Brunhes (0.78 Ma) and younger than 5 Ma, then plot them against latitude. We can also use vdm_b to plot the intensities expected from the present dipole moment (~80 ZAm$^2$).
In [11]:
help(pmag.convert_ages)
In [12]:
# read in the sites.txt file as a dataframe
site_df=pd.read_csv('data_files/convert_ages/sites.txt',sep='\t',header=1)
# get rid of any records without intensity data or latitude
site_df=site_df.dropna(subset=['int_abs','lat'])
# Pick out the sites with 'age' filled in
site_df_age=site_df.dropna(subset=['age'])
# pick out those with age_low and age_high filled in
site_df_lowhigh=site_df.dropna(subset=['age_low','age_high'])
# concatenate the two
site_all_ages=pd.concat([site_df_age,site_df_lowhigh])
# get rid of duplicates (records with age, age_high AND age_low)
site_all_ages.drop_duplicates(inplace=True)
# Pandas reads in blanks as NaN, which pmag.convert_ages hates
# this replaces all the NaNs with blanks
site_all_ages.fillna('',inplace=True)
# converts to a list of dictionaries
sites=site_all_ages.to_dict('records')
# converts the ages to Ma
converted_df=pmag.convert_ages(sites)
# turn it back into a DataFrame
site_ages=pd.DataFrame(converted_df)
# filter away
site_ages=site_ages[site_ages.age.astype(float) <= 5]
site_ages=site_ages[site_ages.age.astype(float) >=0.05]
Let's plot them up and see what we get.
In [13]:
plt.plot(site_ages.lat,site_ages.int_abs*1e6,'bo')
# put on the expected values for the present dipole moment (~80 ZAm^2)
lats=np.arange(-80,70,1)
vdms=80e21*np.ones(len(lats))
bs=pmag.vdm_b(vdms,lats)*1e6
plt.plot(lats,bs,'r-')
plt.xlabel('Latitude')
plt.ylabel(r'Intensity ($\mu$T)')
plt.show()
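The red curve above is just the surface field of a geocentric axial dipole with the given moment. As a sanity check on pmag.vdm_b(), here is the standard dipole formula computed from scratch (the constants are textbook values, not read from PmagPy's source):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7   # vacuum permeability (T m/A)
R_E = 6.371e6            # Earth's mean radius (m)

def dipole_b(vdm, lat_deg):
    """Surface field intensity (T) of a geocentric axial dipole with
    moment vdm (A m^2) at geographic latitude lat_deg (degrees)."""
    lam = np.radians(lat_deg)
    return (MU0 * vdm / (4 * np.pi * R_E**3)) * np.sqrt(1 + 3 * np.sin(lam)**2)

# for the ~80 ZAm^2 present-day moment this gives ~31 uT at the
# equator and twice that (~62 uT) at the poles
print(dipole_b(80e21, 0) * 1e6, dipole_b(80e21, 90) * 1e6)
```

The factor of two between pole and equator follows directly from the sqrt(1 + 3 sin²λ) term.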
That is pretty awful agreement. Someday we need to figure out what is wrong with the data or our GAD hypothesis.
[MagIC Database] [command line version]
Sometimes you want to read in a MagIC file and print out the desired key. Pandas makes this easy! In this example, we will print out latitudes for each site record.
In [14]:
sites=pd.read_csv('data_files/download_magic/sites.txt',sep='\t',header=1)
print (sites.lat)
[MagIC Database] [command line version]
This example demonstrates how to select MagIC records that meet a certain criterion, like having a particular method code.
Note: to output into a MagIC formatted file, we can change the DataFrame to a list of dictionaries (with df.to_dict("records")) and use pmag.magic_write().
In [15]:
help(pmag.magic_write)
In [16]:
# read in the data file
spec_df=pd.read_csv('data_files/magic_select/specimens.txt',sep='\t',header=1)
# pick out the desired data using the method_codes column
spec_df=spec_df[spec_df.method_codes.str.contains('LP-DIR-AF')]
specs=spec_df.to_dict('records') # export to list of dictionaries
success,ofile=pmag.magic_write('data_files/magic_select/AF_specimens.txt',specs,'specimens') # 'specimens' is the table type in data model 3.0
It is frequently desirable to format tables for publications from the MagIC formatted files. This example is for the sites.txt formatted file. It will create a site information table with the location and age information, and directions and/or intensity summary tables. The function to call is ipmag.sites_extract().
In [17]:
help(ipmag.sites_extract)
Here is an example for how to create Latex files:
In [18]:
#latex way:
ipmag.sites_extract(directions_file='directions.tex',intensity_file='intensities.tex',
output_dir_path='data_files/3_0/McMurdo',info_file='site_info.tex',latex=True)
Out[18]:
And here is how to create Excel files:
In [19]:
#xls way:
if has_xlwt:
    print(ipmag.sites_extract(output_dir_path='data_files/3_0/McMurdo'))
In [20]:
help(ipmag.criteria_extract)
In [21]:
# latex way:
ipmag.criteria_extract(output_dir_path='data_files/3_0/Megiddo',
latex=True,output_file='criteria.tex')
Out[21]:
In [22]:
#xls way:
if has_xlwt:
    print(ipmag.criteria_extract(output_dir_path='data_files/3_0/Megiddo'))
In [23]:
help(ipmag.specimens_extract)
In [24]:
#latex way:
ipmag.specimens_extract(output_file='specimens.tex',landscape=True,
output_dir_path='data_files/3_0/Megiddo',latex=True,longtable=True)
Out[24]:
In [25]:
#xls way:
if has_xlwt:
    print(ipmag.specimens_extract(output_dir_path='data_files/3_0/Megiddo'))
[MagIC Database] [command line version]
The programs ipmag.download_magic_from_id() and ipmag.download_magic_from_doi() download files from the MagIC website, and ipmag.download_magic() unpacks the downloaded .txt file into individual text files. This program has an option to also separate the contribution into separate folders for each location.
As an example, download the contribution with the DOI 10.1029/2019GC008479. Make a folder and put the downloaded txt file (called "magic_contribution.txt") into it. Then use ipmag.download_magic() to unpack the .txt file.
Here is an example.
In [2]:
help(ipmag.download_magic_from_doi)
In [3]:
help(ipmag.download_magic)
And here we go...
In [4]:
dir_path='data_files/download_magic' # set the path to the correct working directory
reference_doi='10.1029/2019GC008479' # set the reference DOI
magic_contribution='magic_contribution.txt' # default filename for downloaded file
ipmag.download_magic_from_doi(reference_doi)
os.rename(magic_contribution, dir_path+'/'+magic_contribution)
ipmag.download_magic(magic_contribution,dir_path=dir_path,print_progress=False)
Out[4]:
In [5]:
help(ipmag.download_magic_from_id)
In [6]:
dir_path='data_files/download_magic' # set the path to the correct working directory
magic_id='16676' # set the magic ID number
magic_contribution='magic_contribution_'+magic_id+'.txt' # set the file name string
ipmag.download_magic_from_id(magic_id) # download the contribution from MagIC
os.rename(magic_contribution, dir_path+'/'+magic_contribution) # move the contribution to the directory
ipmag.download_magic(magic_contribution,dir_path=dir_path,print_progress=False) # unpack the file
Out[6]:
You could look at these data with dmag_magic for example... (see the PmagPy_plots_analysis notebook).
[MagIC Database] [command line version]
We can turn around and upload the file we just downloaded in download_magic. For this we use ipmag.upload_magic() in the same directory as for the download. You can try to upload the file you create to the MagIC database as a private contribution here: https://www2.earthref.org/MagIC/upload
In [28]:
help(ipmag.upload_magic)
In [29]:
ipmag.upload_magic(dir_path='data_files/download_magic',concat=True)
Out[29]:
If this were your own study, you could now go to https://earthref.org/MagIC and upload your contribution to a Private Workspace, validate, assign a DOI and activate!
MagIC data model 3 removed redundant columns from the MagIC tables, so the hierarchy from specimens (in the measurements and specimens tables) up through samples, sites, and locations is lost. To put this information back into the measurements table, we have the function cb.add_sites_to_meas_table(), which is super handy when data analysis requires it.
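The idea behind cb.add_sites_to_meas_table() can be pictured as a chain of Pandas merges through the specimen → sample → site hierarchy. A toy sketch (the table contents here are invented for illustration; the real function handles many more details, such as missing rows and extra columns):

```python
import pandas as pd

# toy versions of the three tables that carry the hierarchy
meas = pd.DataFrame({'specimen': ['s1a', 's2a'], 'magn_moment': [1e-9, 2e-9]})
specs = pd.DataFrame({'specimen': ['s1a', 's2a'], 'sample': ['s1', 's2']})
samps = pd.DataFrame({'sample': ['s1', 's2'], 'site': ['site1', 'site2']})

# walk the hierarchy with left merges so every measurement keeps its row
meas = meas.merge(specs, on='specimen', how='left').merge(samps, on='sample', how='left')
print(meas[['specimen', 'site']])
```

After the merges, each measurement record carries its sample and site names, which is what the cb function delivers for real contributions.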
In [30]:
help(cb.add_sites_to_meas_table)
In [31]:
status,meas_df=cb.add_sites_to_meas_table('data_files/3_0/McMurdo')
meas_df.columns
Out[31]:
In [32]:
help(cb.get_intensity_col)
In [33]:
magn_col=cb.get_intensity_col(meas_df)
print (magn_col)
[MagIC Database] [command line version]
To convert the binary formatted 2G Enterprises measurement files, we can use the function convert._2g_bin() in the convert_2_magic module (imported as convert).
In [34]:
help(convert._2g_bin)
In [35]:
# set the input directory
input_dir='data_files/convert_2_magic/2g_bin_magic/mn1/'
mag_file='mn001-1a.dat'
convert._2g_bin(mag_file=mag_file,input_dir=input_dir,dir_path=input_dir)
Out[35]:
These are measurement data for a single specimen, so we can take a quick look at the data in an equal area projection.
In [36]:
help(ipmag.plot_di)
In [37]:
meas_df=pd.read_csv(input_dir+'measurements.txt',sep='\t',header=1)
ipmag.plot_net(1)
ipmag.plot_di(dec=meas_df['dir_dec'],inc=meas_df['dir_inc'])
[MagIC Database] [command line version]
This program converts Micromag hysteresis files into MagIC formatted files. Because this program creates files for uploading to the MagIC database, specimens should also have sample/site/location information, which can be provided on the command line. If this information is not available, for example if this is a synthetic specimen, specify syn=True for a synthetic name.
Someone named Lima Tango has measured a synthetic specimen named myspec for hysteresis and saved the data in a file named agm_magic_example.agm in the agm_magic/agm_directory folder. The backfield IRM curve for the same specimen was saved in the same directory as agm_magic_example.irm. Use the function convert.agm() to convert the data into a measurements.txt output file. For the backfield IRM file, set the keyword "bak" to True. These were measured using cgs units, so be sure to set the units keyword argument properly. Combine the two output files together using the instructions for combine_magic. The agm files can be plotted using hysteresis_magic, but the back-field plots are broken.
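The cgs units mentioned above differ from MagIC's SI conventions by fixed factors (1 emu = 10⁻³ A m² for moment; 1 G = 10⁻⁴ T and 1 Oe = 0.1 mT for field). A quick reference sketch (these factors are standard physics conversions, not read from the converter's source; the example moment is made up):

```python
# conversion factors between cgs units and the SI units MagIC uses
EMU_TO_AM2 = 1e-3    # magnetic moment: emu -> A m^2
GAUSS_TO_T = 1e-4    # magnetic induction: G -> T
OE_TO_MT = 0.1       # field: Oe -> mT (treating B in vacuum)

moment_emu = 2.5e-3                    # a hypothetical measured moment
print(moment_emu * EMU_TO_AM2)         # moment in A m^2 (~2.5e-06)
```

Getting the units keyword wrong shifts every moment by three orders of magnitude, which is why it is worth double-checking before uploading.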
In [38]:
help(convert.agm)
In [39]:
convert.agm('agm_magic_example.agm',dir_path='data_files/convert_2_magic/agm_magic/',
specimen='myspec',fmt='old',meas_outfile='agm.magic')
Out[39]:
In [40]:
convert.agm('agm_magic_example.irm',dir_path='data_files/convert_2_magic/agm_magic/',
specimen='myspec',fmt='old',meas_outfile='irm.magic')
Out[40]:
In [41]:
infiles=['data_files/convert_2_magic/agm_magic/agm.magic','data_files/convert_2_magic/agm_magic/irm.magic']
ipmag.combine_magic(infiles,'data_files/convert_2_magic/agm_magic/measurements.txt')
Out[41]:
We can look at these data using hysteresis_magic:
In [42]:
# read in the measurements data
meas_data=pd.read_csv('data_files/convert_2_magic/agm_magic/agm.magic',sep='\t',header=1)
# pick out the hysteresis data using the method code for hysteresis lab protocol
hyst_data=meas_data[meas_data.method_codes.str.contains('LP-HYS')]
# make the dictionary for figures that pmagplotlib likes
# make a list of specimens
specimens=hyst_data.specimen.unique()
cnt=1
for specimen in specimens:
    HDD={'hyst':cnt,'deltaM':cnt+1,'DdeltaM':cnt+2}
    spec_data=hyst_data[hyst_data.specimen==specimen]
    # make a list of the field data
    B=spec_data.meas_field_dc.tolist()
    # make a list of the magnetization data
    M=spec_data.magn_moment.tolist()
    # call the plotting function
    hpars=pmagplotlib.plot_hdd(HDD,B,M,specimen)
    hpars['specimen']=specimen
    # print out the hysteresis parameters
    print(specimen,': \n',hpars)
    cnt+=3
[MagIC Database] [command line version]
Here we convert the Berkeley Geochronology Center's AutoCore format to MagIC using convert.bgc().
In [43]:
help(convert.bgc)
In [44]:
dir_path='data_files/convert_2_magic/bgc_magic/'
convert.bgc('15HHA1-2A',dir_path=dir_path)
Out[44]:
And let's take a look
In [45]:
meas_df=pd.read_csv(dir_path+'measurements.txt',sep='\t',header=1)
ipmag.plot_net(1)
ipmag.plot_di(dec=meas_df['dir_dec'],inc=meas_df['dir_inc'])
[MagIC Database] [command line version]
To convert the Caltech (CIT) format to MagIC, use convert.cit().
Craig Jones’ PaleoMag software package (http://cires.colorado.edu/people/jones.craig/PMag3.html) imports various file formats, including the ’CIT’ format developed for the Caltech lab and now used in magnetometer control software that ships with 2G magnetometers that use a vertical sample changer system. The documentation for the CIT sample format is here: http://cires.colorado.edu/people/jones.craig/PMag_Formats.html#SAM_format. Demagnetization data for each specimen are in their own file in a directory with all the data for a site or study. These files are strictly formatted, with fields determined by the character position in the line. There must be a file with the suffix ‘.sam’ in the same directory as the specimen data files, which gives details about the specimens and a list of the specimen measurement files in the directory.
The first line in the .sam file is a comment (in this case the site name), the second is the latitude and longitude followed by a declination correction. In these data, the declination correction was applied to the specimen orientations so the value of the declination correction is set to be 0.
For detailed description of the .sam and sample file formats, check the PaleoMag Formats website linked to above.
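To make the header layout concrete, here is a toy parse of the first two .sam lines described above. The file content is invented for illustration, and the real format is strictly character-position based, so this whitespace-split parse is only a sketch:

```python
# invented example of the first two lines of a .sam file:
# line 1 is a comment (here the site name); line 2 holds
# latitude, longitude, and the declination correction
sam_text = """PI47 site comment line
 48.6  -87.0   0.0
"""
lines = sam_text.splitlines()
comment = lines[0]
lat, lon, decl_corr = (float(x) for x in lines[1].split())
print(comment, lat, lon, decl_corr)
```

convert.cit() handles all of this (and the fixed-width specimen files) for you; the sketch just shows where the site coordinates come from.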
In [46]:
help(convert.cit)
Use the function convert.cit() to convert the CIT data files from the Swanson-Hysell lab at Berkeley for the PI47 site in the data_files/convert_2_magic/cit_magic/PI47 directory. The site (PI47) was part of a data set published in Fairchild et al. (2016), available in the MagIC database at https://earthref.org/MagIC/11292/. The location name was “Slate Islands”, the naming convention was #2, the specimen name is specified with 1 character, we don’t wish to average replicate measurements, and they were collected by drilling and with a magnetic compass (”FS-FD" and "SO-MAG”).
In [47]:
dir_path='data_files/convert_2_magic/cit_magic/PI47/'
convert.cit(dir_path=dir_path,
magfile='PI47-.sam',locname="Slate Islands",specnum=1,samp_con='2',
methods=['FS-FD','SO-MAG'],noave=True)
Out[47]:
We can make some Zijderveld diagrams (see zeq_magic in the PmagPy_plots_analysis notebook).
In [48]:
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
Out[48]:
Use the function convert.cit() to convert the CIT data files from the USGS lab at Menlo Park. The data file is in the data_files/convert_2_magic/cit_magic/USGS/bl9-1 directory, the file name is bl9-1.sam, and the analyst was Hagstrum. The location name was “Boring volcanic field”, and the site name was set by Hagstrum to BL9001 because the site name cannot be determined from the sample name with the currently available options. The samples were collected by drilling and with magnetic and sun compasses (”FS-FD" and "SO-SM”), the measurements are in oersted instead of the standard millitesla, and we don’t wish to average replicate measurements.
In [49]:
dir_path='data_files/convert_2_magic/cit_magic/USGS/bl9-1'
convert.cit(dir_path=dir_path,
magfile='bl9-1.sam',user='Hagstrum',locname="Boring volcanic field",
sitename='BL9001',methods=['FS-FD','SO-SM','LT-AF-Z'], oersted=True,
noave=True)
Out[49]:
We can look at the Zijderveld and other diagrams with zeq_magic.
In [50]:
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
Out[50]:
Use the function convert.cit() to convert the CIT data files from Benjamin Weiss's lab at MIT. These data were part of a set published in EPSL ("A nonmagnetic differentiated early planetary body", doi:10.1016/j.epsl.2017.03.026) and can be found in MagIC at https://earthref.org/MagIC/11943.
The data file is in the data_files/convert_2_magic/cit_magic/MIT/7325B directory, the file name is 7325B.sam, and the analyst was Weiss. The location name was “NWA 7325”, with the site name coming from the sample name with the "1" convention. The samples are described with the method codes DE-VM, LP-DIR-T, LT-AF-Z, LT-NO, LT-T-Z, and SO-CMD-NORTH (see https://www2.earthref.org/MagIC/method-codes for full descriptions). We also don’t wish to average replicate measurements.
In [51]:
convert.cit(dir_path='data_files/convert_2_magic/cit_magic/MIT/7325B',
magfile='7325B.sam',user='Weiss',locname="NWA 7325",samp_con='1',
methods=['DE-VM', 'LP-DIR-T', 'LT-AF-Z', 'LT-NO', 'LT-T-Z', 'SO-CMD-NORTH'],
noave=True)
Out[51]:
And take a look:
In [52]:
ipmag.zeq_magic(input_dir_path='data_files/convert_2_magic/cit_magic/MIT/7325B', save_plots=False)
Out[52]:
[MagIC Database] [command line version]
If you have a data file format that is not supported, you can relabel column headers to fit the generic format as in the generic_magic example data file.
To import the generic file format, use convert.generic().
In [53]:
help(convert.generic)
In [54]:
convert.generic(magfile='data_files/convert_2_magic/generic_magic/generic_magic_example.txt',
experiment='PI',dir_path='data_files/convert_2_magic/generic_magic')
Out[54]:
In [55]:
# let's take a look
dir_path='data_files/convert_2_magic/generic_magic/'
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
Out[55]: