The code presented in this paper was created to address needs associated with processing ambient noise methods in seismology. It is largely built upon several python packages that already exist to assist with seismology: specifically, ObsPy, MSNoise and Seismic-Noise-Tomography (insert code references here) are the building blocks of SeisSuite. Other ambient noise seismology python packages have also been developed, such as ANTS and Whisper (insert code references here), but for the reasons outlined below these were not ideal for the user and were not chosen as baseline codes to build upon. The need for SeisSuite is also addressed: existing programmes for processing ambient noise seismic methods have pitfalls that SeisSuite has attempted to address.
SeisSuite builds upon key principles for improving ambient noise processing methods. These principles are:
First, the programme had to be open-source, both to help with development and to allow it to become a widely accepted tool for seismologists in the future.
Second, the code was written with ease of installation and processing set-up in mind.
Third, new tools had to be rapid and accessible to develop, so that other users could add them if they so desired.
Fourth, the code had to allow the ideas of ambient noise seismology station site survey design and optimisation to be explored and expanded upon.
SeisSuite was developed because a need was identified for a more generalised ambient noise seismology programme with easy installation and relatively rapid further development. Below, the problems with existing ambient noise seismology programmes are identified, which led to the conclusion that an additional set of codes was necessary.
MSNoise
The Whisper Project
ANTS
Seismic-Noise-Tomography
Additional individual projects, e.g. Mallory Young and the ANU's code for ambient noise seismology.
MSNoise provides easy installation and set-up with its clever use of a configuration GUI and a standardised python setup.py installation file. The problem identified with MSNoise is two-fold. The first is that MSNoise requires the use of a standardised file name and folder formatting structure. Though it has chosen standardised types, this means that all downloaded raw waveform files must be converted to this format before the data can be processed. This usually involves writing another python script and can be costly in terms of time; it can also be frustrating if the user has not dealt with file structures such as BUDS before. The second problem with MSNoise is its focus: the code was developed with continuous waveform monitoring of volcanic sources in mind. This is fine for near real-time monitoring of volcanic hazards, but it is very difficult to adapt to other ambient noise methods.
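To illustrate the kind of conversion script this typically requires, the following minimal sketch reorganises raw miniSEED files into a year/network/station archive layout of the style MSNoise expects. The paths and layout here are illustrative only, not MSNoise's exact specification.

import os
import shutil
from obspy import read

RAW_DIR = "raw_waveforms"        # hypothetical folder of downloaded files
ARCHIVE_DIR = "msnoise_archive"  # hypothetical target archive root

for fname in os.listdir(RAW_DIR):
    path = os.path.join(RAW_DIR, fname)
    s = read(path, headonly=True)[0].stats   # read headers only (fast)
    # e.g. msnoise_archive/2014/AU/STKA/BHZ.D/
    target = os.path.join(ARCHIVE_DIR, str(s.starttime.year),
                          s.network, s.station, "%s.D" % s.channel)
    os.makedirs(target, exist_ok=True)
    shutil.copy(path, target)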
The European Whisper project has made strides in increasing the speed and accuracy of ambient noise seismological processing. Its focus is largely on broad, international ambient noise seismic monitoring. However, there were some issues when choosing to use Whisper. The first is access: the project is not strictly open-source, and it therefore took an email to the original developers and a wait time of several weeks before the code could be obtained. Installation was also an issue, and the time taken for this was extensive. Though the gains in processing speed were great, the code could not be developed further for licensing reasons. In theory, Whisper could be the best option for ambient noise processing in the future. However, until it becomes open-source and allows developers outside the project to critique and improve upon the code, it will have the same problem as MSNoise: it is narrowly focused and not ideal for developing new methods in ambient noise seismology.
In the time since SeisSuite was started, the documentation explaining the use of ANTS has improved, but it remains somewhat challenging for a general user to whom the installation has not been explained extensively. The scripts are stand-alone and not associated with the individual tools developed for ANTS, and the decision to skip this programme was made early on. From second-hand accounts, some who have installed and operated ANTS have had a positive experience, but this was not the case for other users, and this prompted the development of SeisSuite.
In addition to the points raised above, SeisSuite also focuses on survey design and optimisation, for which no previous code had yet been identified. All code related to seismological survey design and network station spacing has previously been associated with active seismic sources, such as earthquake event seismology or triggered-event (e.g. explosive or air-gun sourced) exploration seismology (find references associated with survey design for these types of seismology).
SeisSuite is developed with work-flow streamlining and ease of set-up and installation in mind. While there are currently still bugs in the installation process, the package already adheres to the python packaging guidelines set out in (insert reference/s related to streamlining and PyPi packaging guidelines and installation). It is readily available on GitHub at github.com/boland1992/seissuite, and can be forked, cloned and have branches edited in a collaborative open-source coding environment. SNT and ANTS follow the same version-control guidelines; however, they are not packaged for easy installation and set-up like SeisSuite is, and this packaging can save a considerable amount of time when it comes to running an ambient noise survey. PyPI is the pip python packaging project and is an index of developed python modules. SeisSuite's modular, streamlined file structure allows the separate and most widely used functions and classes to be utilised without the need to download the entire GitHub project.
Below are instructions for how to set up SeisSuite both with and without GitHub.
Download the SeisSuite project from the GitHub repository (ensure that git is installed on your machine first):
$ git clone https://github.com/boland1992/seissuite
The most up-to-date project directory will be cloned into a directory named seissuite in your current working directory. Change into the seissuite folder and run the commands below; you should see the following folder structure:
$ pwd
/home/user_name/seissuite
$ tree -d
├── bin
│   ├── configs
│   │   └── tmp
│   ├── shapefiles
│   └── tmp
├── build
│   └── lib
├── dist
├── docs
├── seissuite
│   ├── ant
│   ├── azimuth
│   ├── database
│   ├── gui
│   ├── misc
│   ├── response
│   ├── sort_later
│   │   ├── instrument_database
│   │   ├── nonlinloc
│   │   │   └── control_files
│   │   └── waveloc
│   ├── spacing
│   ├── spectrum
│   └── trigger
└── test
The folder structure of the programme works as follows:
1. Binaries or scripts are placed in the bin/ directory.
2. The build/ directory is generated automatically by python's dist-utils during installation and holds intermediate copies of the package files.
3. The dist/ directory is automated by python's dist-utils module and contains previous versions of the programme for installation if desired.
4. The docs/ directory contains documentation, including examples and how-tos with explanations of how the software works. This directory is, and always will be, a work in progress.
5. The seissuite/ directory contains all module and library .py scripts that are most commonly used by the scripts in the bin/ directory.
6. Though not well populated at the time of writing, the test/ directory has been created in the hope that this programme will eventually be extensively unit-tested, to increase its validity and decrease the total number of errors seen by end-users.
Once you have confirmed that the folder structure is in order, run the installation programme through the python dist-utils module file setup.py:
$ python setup.py install
This command will install the seissuite module into the user's default python site-packages directory, from where it can be called independently of where the calling scripts are located. If any modules are playing up and will not work as desired, a simple re-run of python setup.py install after changing one of the module scripts contained in seissuite/seissuite/ (e.g. ant, misc, etc.) will refresh the installed copies.
That is it! Once all of those operations have been run, you are now ready to use SeisSuite.
PyPi (pip) Installation:
If you have pip installed on your machine already, then simply run:
$ pip install seissuite
This will install everything in the seissuite module from the folder structure outlined above, for example seissuite.ant, seissuite.misc and the others.
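Once installed, the subpackages can then be imported directly, without the rest of the repository being present. The exact module names below (e.g. pscrosscorr) are illustrative:

# import only the pieces needed; module names here are illustrative
from seissuite.ant import pscrosscorr   # cross-correlation tools
from seissuite import misc              # miscellaneous helpers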
If you do not yet have pip installed, please do so before running the above command.
Simplification of use and work-flow is ideal for performing an ambient noise survey quickly and efficiently. SeisSuite aims to achieve this through the use of a folder-structure set-up script named (name is subject to change) and configuration files located in the bin/configs directory.
First, place any and all of the configuration files you wish to process in the configs folder. They can be batch processed, and more than one can be run by the 02_timeseries_process.py main programme script at a time, as sketched below. If more than one configuration file is in the configs folder at the time of running, they will be processed in alphabetical order. An example configuration file can be found in the appendix and contains explanations of the variables and settings that can be used with SeisSuite.
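A minimal sketch of this batch behaviour follows; process_config is a hypothetical stand-in for the work done by 02_timeseries_process.py:

import glob

# gather every .cnf file in bin/configs and process in alphabetical order
config_files = sorted(glob.glob("configs/*.cnf"))
for cnf in config_files:
    print("processing configuration:", cnf)
    # process_config(cnf)   # hypothetical per-configuration entry point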
The 00_setup.py script should be run after at least one configuration file has been set up in the configs directory. This script creates the correct output folder structure in the output directory chosen in the configuration file. This helps with knowing exactly where all outputs will be written once the programme has been run, which speeds up development by removing the need to specify many different directories.
SeisSuite Output folder structure:
├── INPUT
│   ├── DATA
│   ├── DATABASES
│   ├── DATALESS
│   └── XML
└── OUTPUT
    ├── CROSS
    ├── DEPTH
    ├── FTAN
    └── TOMO
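A minimal sketch of what 00_setup.py does with this tree, assuming a FOLDER path taken from the configuration file (the path below is illustrative):

import os

FOLDER = "/storage/ambient_noise_survey"   # illustrative FOLDER from the .cnf file
for sub in ("INPUT/DATA", "INPUT/DATABASES", "INPUT/DATALESS", "INPUT/XML",
            "OUTPUT/CROSS", "OUTPUT/DEPTH", "OUTPUT/FTAN", "OUTPUT/TOMO"):
    os.makedirs(os.path.join(FOLDER, sub), exist_ok=True)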
Smart data selection and SQL reference databases.
Data in the form of raw, pre-merged or pre-bandpass-filtered continuous seismograms are supported. So far, only miniSEED waveform data, with metadata in the form of dataless SEED or station XML, are supported. This is subject to change.
SeisSuite has the added benefit of smart data selection over its counterparts. Once a new data set and its metadata have been placed into the INPUT/DATA and either INPUT/XML or INPUT/DATALESS folders respectively, the programme initialises a reference SQL database. The database has many tables, and a full explanation can be found in the docs/ directory of seissuite. Each miniSEED file's headers are opened, and the file location and trace start and end times are stored in the database. This means the user need not worry about folder structure or file length, and changes to the input data format, e.g. to BUDS or SDS structures, are unnecessary. This saves considerable time, as individual scripts no longer need to be written to automate this process for every new dataset that comes in. All that has to be ensured is that the data are in miniSEED format.
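The idea can be sketched as follows; the table and column names here are illustrative, not the actual seissuite schema, and .mseed file extensions are assumed:

import glob
import sqlite3
from obspy import read

# scan every miniSEED file (headers only) and record file path, trace ID
# and start/end times in an SQLite table (illustrative schema)
conn = sqlite3.connect("INPUT/DATABASES/timeline.db")
conn.execute("""CREATE TABLE IF NOT EXISTS timeline
                (path TEXT, trace_id TEXT, starttime REAL, endtime REAL)""")
for path in glob.glob("INPUT/DATA/**/*.mseed", recursive=True):
    for tr in read(path, headonly=True):
        conn.execute("INSERT INTO timeline VALUES (?, ?, ?, ?)",
                     (path, tr.id, tr.stats.starttime.timestamp,
                      tr.stats.endtime.timestamp))
conn.commit()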
Once run, the 02_timeseries_process.py script checks whether a given data set configuration already has a reference database. If not, one is initialised, and the cross-correlation programme can then search through it very quickly using SQL to find what data it must process at what time. It also has the benefit of knowing in advance that there is data to be processed, which minimises false file openings and errors associated with low data completeness when running cross-correlations.
If the dataset is large (circa 1 TB, for example), database initialisation may take some time. Please be patient in this instance.
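Continuing the illustrative schema above, finding the files that cover a given cross-correlation window then becomes a single query rather than a directory walk:

from obspy import UTCDateTime

# files whose traces overlap the window 2014-01-01 to 2014-01-02
t0 = UTCDateTime("2014-01-01").timestamp
t1 = UTCDateTime("2014-01-02").timestamp
rows = conn.execute("""SELECT path, trace_id FROM timeline
                       WHERE starttime < ? AND endtime > ?""",
                    (t1, t0)).fetchall()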
There is little evidence that the station spacings of many new seismic networks are chosen with ambient noise methods in mind. The primary focuses of seismic networks vary; some examples are active reflection seismic surveys and continuous monitoring for earthquake and micro-seismic events. Producing results from ambient noise surveys is not yet a primary focus for any known seismic network installation as of the date of this study. While this is not likely to change, it is a good idea to explore all avenues of potential outputs when building a new seismic network, and in this regard results from ambient noise should certainly be taken into account.
SeisSuite aims to assist in statistically optimising and constraining new seismic network station site locations for use with ambient noise methods.
Point density distributions are used as a proxy to find the optimised station spacing that gives the highest possible resolution for a given number of stations in a confined target survey area. A point density distribution is found by taking the great-circle line between each station pair in the network. Filters associated with known maximum attenuation distances are then input by the user; 1000 km is the default. Points are then placed at a set number per kilometre along each great-circle line. Once a grid with a certain box size is set by the user, a point density per box can be estimated. Boxes within the grid that have a higher point density better constrain ambient noise tomographic inversion images; the densities can therefore be used alone as a proxy for velocity map resolution and even for network station spacing distributions.
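A minimal sketch of this procedure is given below. The station coordinates are hypothetical, and straight-line interpolation stands in for a true great-circle sampler:

import itertools
import numpy as np
from obspy.geodetics import gps2dist_azimuth

stations = {"STA1": (-37.0, 144.0), "STA2": (-33.9, 151.2),
            "STA3": (-31.9, 115.9)}   # (lat, lon), hypothetical
MAX_DIST_KM = 1000.0                  # attenuation cut-off (default above)
POINTS_PER_KM = 0.1                   # sampling density along each path

lats, lons = [], []
for (la1, lo1), (la2, lo2) in itertools.combinations(stations.values(), 2):
    dist_km = gps2dist_azimuth(la1, lo1, la2, lo2)[0] / 1e3
    if dist_km > MAX_DIST_KM:
        continue                      # pair too far apart: fully attenuated
    n = max(int(dist_km * POINTS_PER_KM), 2)
    lats.extend(np.linspace(la1, la2, n))   # straight-line stand-in for
    lons.extend(np.linspace(lo1, lo2, n))   # great-circle sampling

# bin the path points onto a 200x200 grid of counts
density, lon_edges, lat_edges = np.histogram2d(lons, lats, bins=200)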
Figure 1 shows the point density distribution of the current networks available in the Incorporated Research Institutions for Seismology (IRIS) database. This distribution of stations has been declustered to have one representative station for every 0.1 degrees, or circa 10 km minimum spacing, in order to avoid small-scale network clusters affecting the overall continental-scale distribution. SeisSuite has the functionality to statistically find a set of N new stations to add to this network by finding the centres of the areas of lowest point density and running multiple iterations until the user is satisfied with the new point density distribution.
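The declustering step can be sketched as a simple greedy filter under the roughly 0.1 degree criterion above; this is an illustration, not necessarily the exact seissuite algorithm:

import numpy as np

def decluster(coords, min_spacing=0.1):
    """Greedily keep stations at least min_spacing degrees apart.

    coords: iterable of (lat, lon) pairs; returns the retained subset.
    """
    kept = []
    for lat, lon in coords:
        if all(np.hypot(lat - klat, lon - klon) >= min_spacing
               for klat, klon in kept):
            kept.append((lat, lon))
    return np.array(kept)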
Figure 2 shows results produced with SeisSuite for designing an optimised station spacing for ambient noise methods across the Australian continent. The result is a network of 144 evenly distributed seismic stations, giving a much more evenly spaced network for increasing the resolution of tomographic velocity images, or even potentially for detecting seismic events across the field area.
Figure 1: Station-pair great-circle path density point distribution of seismic stations available on IRIS for the Australian continent. These include stations sourced from the Australian National Seismic Network (AU) and the Australian Seismometers in Schools Network (S). The grid contains 200x200 boxes or “bins”.
Figure 2: Point density distribution for statistically evenly spaced seismic stations. N is the number of stations and was taken to be 144 in this case; this is the number of currently operating stations in Australia on IRIS for the combined AU and S networks.
Ultimately, the future of this optimised spacing would also include GIS location data such as the locations of roads, streams, power lines and other sources of potentially contaminating noise. When these and other local noise source constraints are taken into account, the choices for new station sites would narrow, leaving fewer difficult station site location decisions to be made by the user.
In summary, SeisSuite has the following important additions of note:
Statistically optimised ambient noise seismic survey station spacing.
GitHub and pip quick installation and version control with a proper modular pythonic structure.
Simplified configuration file settings and additional pre-processing options (turn them on and off at start-up).
SQL reference database to enable generalised data calling regardless of file length and input folder structure.
Full options to enable the choice of either serialisation or parallelisation of the trace merging, pre-processing, cross-correlation, stacking and FTAN dispersion curve analysis steps (a minimal sketch follows this list). This can allow python to utilise all of the processing power on a given workstation. MPI support for cluster computing is currently under development for even faster ambient noise survey processing.
Support for statistically optimising the station spacing for both: adding to existing seismic networks, and creating a new network from scratch.
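As a minimal sketch of the serial/parallel toggle mentioned above, using python's standard multiprocessing module (xcorr_day is a hypothetical per-day worker, not a seissuite function):

import multiprocessing as mp

PARALLEL = True   # illustrative configuration flag

def xcorr_day(day):
    """Hypothetical worker: pre-process and cross-correlate one day of data."""
    ...

if __name__ == "__main__":
    days = range(90)
    if PARALLEL:
        with mp.Pool() as pool:          # one worker per available core
            results = pool.map(xcorr_day, days)
    else:
        results = [xcorr_day(d) for d in days]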
There are still additions to this programme to be developed. In the end, the final product should be able to optimise where to place additional stations in existing seismic networks, as well as provide ideal site locations for new networks. This will be achieved through a combination of the code that has already been developed and many more additional constraints on these locations. So far, the code can only propose ideal station sites based on area geometry and point-density spacings (see section etc), and does not account for many real-world constraints on seismic stations, in particular their relationship to ambient noise methods. A few examples of these real-world constraints are: the distance of a station to its respective cross-correlation pair station; the station's proximity to high-amplitude noise sources that are not ideal for ambient noise studies, such as power lines, high-use roads and trains; and the ease of installation for any given new station. These and other factors should each decrease the number of ideal sites at which to place any new seismic station for collecting ambient noise data.
Acknowledgements
References
Appendix
Example SeisSuite configuration file:
###############################################################################
#
# This is an example of configuration file, wherein global paths and parameters
# related to the seismic noise tomography procedure are defined. At least one
# configuration file should reside in the folder in which you intend to run
# your scripts. The configuration file(s) can have any name, as long as the
# extension is 'cnf': e.g., 'tomo.cnf', 'myconfig.cnf', 'test.cnf'...
#
# The parameters are divided in several sections:
# - [processing] :
# - [paths] : default paths to input/output folders
# - [maps] : parameters to plot maps
# - [cross-correlation] : parameters to calculate cross-correlations
# - [FTAN] : parameters of the frequency-time analysis
# - [tomography] : parameters of the tomographic inversion
#
# Before using the scripts and package pysismo, you should at least make sure
# that the paths in section [paths] and shapefiles in section [maps] are
# consistent with your own files and folders organization. And, of course, you
# should make sure to set the correct interval of dates to calculate the cross-
# correlations, in section [cross-correlation]. The other parameters can be
# fine-tuned later as you analyze your data.
#
# Module pysismo.psconfig takes care of reading the configuration file and
# defining the global parameters. If only one configuration file (*.cnf) is
# found in the current folder, then psconfig reads it silently. If
# several *.cnf files are found, then you'll be prompted to select one of
# them.
#
# Other modules then import from psconfig the parameters they need, e.g.:
#
# from psconfig import CROSSCORR_DIR, FTAN_DIR, PERIOD_BANDS, ...
#
# Note that most of (but not all) the global parameters are actually default
# values that can be overridden in the functions where they are used. For
# example, ``PERIOD_BANDS`` is the default value of the input argument ``bands``
# of the function pscrosscorr.CrossCorrelation.plot_by_period_band(), but you
# can specify other bands by explicitly passing the argument, e.g.:
#
# plot_by_period_band(bands=[[10, 30], [20, 50]])
#
# If, in one script, you want to modify a global parameter without touching
# the configuration file, you must first import psconfig, then modify the
# parameters as desired, and finally import other module(s), e.g.:
#
# >>> from pysismo import psconfig
# >>> psconfig.FTAN_DIR = 'mydir'
# >>> from pysismo import pscrosscorr
# >>> pscrosscorr.FTAN_DIR
# 'mydir' # ok, the changes have been taken into account
#
# It is strongly discouraged to modify global parameters once psconfig (or a
# module importing it) has been imported, as the effect can be highly
# unpredictable (immutable default values won't be affected, mutable default
# values can be affected, parameters used as is in the code will be affected).
#
###############################################################################
#======
[automate]
#======
# If ANY of the AUTOMATE variables in the ./configs/ folder are set to True, then the application will loop over all of them.
# Otherwise, if all of them are set to False, you can only choose one. The option to choose more than one, but not all, is coming soon!
AUTOMATE = True
#======
[paths]
#======
# Set one folder path. This path will be used by setup.py to create the input
# folders (DATA, DATALESS and XML) and the output folders (CROSS, FTAN, TOMO
# and DEPTH). Also set the default TIMLINE_DB and RESPONSE_DB paths for where
# these SQL databases are to be saved.
# If FOLDER reads DEFAULT, the current working directory of the application will be chosen.
# Ensure that the folder path does NOT have / at the end
#FOLDER = DEFAULT
FOLDER = /storage/MASTERS/CONFIGURATIONS/CONTINENTAL_AUSTRALIA
#set the folder where the computer programs in seismology have been installed if required.
# example CPIS dir:
COMPUTER_PROGRAMS_IN_SEISMOLOGY_DIR = /home/seis_suite/CPIS/PROGRAMS.330/bin
#======
[processing]
#======
# Set the individual preprocessing techniques that you want performed on your analysis. Each
# must be set either True or False to work. Any other options will give an error.
# Take note that setting RESP_TOL, RESP_EFFECT and RESP_RANGE = [1,40] to large values will result in very few stations being chosen for processing.
MAX_DISTANCE = 2000.0 ; set max distance before signal is likely to be fully attenuated!
DECLUSTER = True
DECLUSTER_MIN = 10.0 ; set minimum decluster distance (km)
TDD = True ; trim, demean and detrend the trace (True recommended)
RESP_REMOVE = True ; remove instrument response? (True recommended)
EVENT_REMOVE = False ; automate event removal from catalogue (dist to event, window and avg V); uses IRIS and others and only removes regional events inside your network's range! (automated)
HIGHAMP_REMOVE = False ; automate removal of high amplitude noise from entire database (recommended to perform this in collaboration with EVENT_REMOVE)
# in the future do I want to set effectiveness limit for RESP_CHECK
RESP_CHECK = False ; perform an instrument response check on data and only use data with tolerance overlap
RESP_RANGE = [1,40] ; period range for current project. Default is 1-40s period for ambient noise methods.
RESP_TOL = 0.50 ; minimum overlap between instrument response 95% effectiveness frequency range and RESP_RANGE
RESP_EFFECT = 0.80 ; default is 80% effectiveness frequency range. So the point at which the instrument response gain is above 95% of the maximum gain for that given instrument!
BANDPASS = True ; apply a band-pass filter across the set spectrum
DOWNSAMPLE = True ; if set to True, data is downsampled to the minimum sample rate in the database
COMPLETENESS = True ; do initial completeness check, do not process the series below tolerance
TIME_NOMALISATION = True ; perform time normalisation OR one-bit normalisation, OR input own!
SPEC_WHITENING = True ; perform spectral whitening of data
#=====
[maps]
#=====
# paths to shapefiles (coasts, tectonic provinces and labels), used
# to plot maps. If no COAST_SHP is provided, the tomographic maps will not plot correctly!
#
# - ``COAST_SHP`` should be a shapefile containing lines or polygons
# representing coasts (you can also include borders).
#
# - ``TECTO_SHP`` should be a shapefile containing polygons representing
# tectonic provinces, AND AN ATTRIBUTE TABLE whose first field
# contains the province's category, which will be used to affect
# a color to the polygon (see below).
#
# - ``TECTO_LABELS`` should be a shapefile containing points representing
# the location of the labels associated with the tectonic provinces,
# AND AN ATTRIBUTE TABLE whose first field contains the label (characters
# '\' will be replaced with line breaks), and the second field contains
# the label's angle.
#
# You can set any of these files to ``null``.
COAST_SHP = ./shapefiles/aus.shp
TECTO_SHP = False
TECTO_LABELS = null
# JSON dict giving the color of the tectonic provinces according to their
# category (first field of the attribute table of ``TECTO_SHP``, see above).
# A category not appearing in this dict will be filled with white.
# A color can be any object understood by matplotlib: a string (e.g., "green"),
# a grey shade (e.g., "0.5"), an html hex string (e.g., "#eeefff"),
# a R/G/B tuple (e.g., [0.5, 0.5, 0.5]) or a R/B/G/alpha tuple (e.g.,
# [0.5, 0.5, 0.5, 0.5]).
TECTO_COLORS = {
"Archean": [1, 0.757, 0.757],
"Phanerozoic": [1, 1, 0.878],
"Neoproterozoic": "0.863"
}
# bounding box of (large) global maps and (small) inset maps
# (min lon, max lon, min lat, max lat in JSON lists)
# this is the bounding box for the stations that will be processed for tomo map
BBOX_LARGE = [110, 155, -45, -10]
BBOX_SMALL = [110, 155, -45, -10]
#==================
[cross-correlation]
#==================
# Comment out crosscorr_tmax if you want the function in psconfig.py to choose one for you. Else put the number of seconds for your time window!
# Comment out the FIRSTDAY and LASTDAY variables in order to have it determined automatically from the SQL databases (must run the create_database.py script for this to work!)
# test the first 3 months of 2014 for cross-correlation!
XCORR_INTERVAL = 45.0 ; number of minutes in xcorr time series
USE_DATALESSPAZ = True ; use dataless files to remove instrument response?
USE_STATIONXML = False ; use stationXML files to remove instrument response?
FIRSTDAY = 01/01/2014 ; first day of cross-correlation (d/m/y)
LASTDAY = 31/03/2014 ; last day of cross-correlation (d/m/y)
MINFILL = 0.80 ; minimum data fill within xcorr interval, default at 80%
CROSSCORR_STATIONS_SUBSET = null ; subset of stations (null if all, else JSON list)
CROSSCORR_SKIPLOCS = ["50"] ; locations to skip (JSON list)
PERIODMIN = 2.0 ; bandpass filtering minimum period
PERIODMAX = 25.0 ; bandpass filtering maximum period
CORNERS = 2 ; number of corners?
ZEROPHASE = True ; zerophase?
PERIOD_RESAMPLE = 0.5 ; resample period to decimate traces, after band-pass
ONEBIT_NORM = False ; one-bit normalization?
PERIODMIN_EARTHQUAKE = 25.0 ; estimated earthquake period minimum for filtering
PERIODMAX_EARTHQUAKE = 75.0 ; estimated earthquake period maximum for filtering
WINDOW_FREQ = 0.0002 ; freq window (Hz) to smooth ampl spectrum
CROSSCORR_TMAX = 300 ; number of seconds for xcorr window both positive and negative
PLOT_CLASSIC = False ; Plot all xcorr station pairs. Take note: this takes a LONG time for large networks!
PLOT_DISTANCE = True ; Plot only time-series offset of all xcorr pairs with distance
#=====
[FTAN]
#=====
# default period bands (JSON list), used to:
# - plot cross-correlation by period bands, in plot_FTAN(), plot_by_period_bands()
# - plot spectral SNR, in plot_spectral_SNR()
# - estimate min spectral SNR, in FTANs()
PERIOD_BANDS = [[4, 7], [7, 15], [10, 22], [15, 30], [20, 35]]
# (these bands focus on periods ~5, 10, 15, 20, 25 seconds)
# default parameters to define the signal and noise windows used to
# estimate the SNR:
# - the signal window is defined according to a min and a max velocity as:
# dist/vmax < t < dist/vmin
# - the noise window has a fixed size and starts after a fixed trailing
# time from the end of the signal window
SIGNAL_WINDOW_VMIN = 2.0
SIGNAL_WINDOW_VMAX = 4.0
SIGNAL2NOISE_TRAIL = 500.0
NOISE_WINDOW_SIZE = 500.0
# periods and velocities of the FTAN: start, stop and step (JSON lists)
RAWFTAN_PERIODS_STARTSTOPSTEP = [3.0, 45.1, 1.0]
CLEANFTAN_PERIODS_STARTSTOPSTEP = [3.0, 45.1, 1.0]
FTAN_VELOCITIES_STARTSTOPSTEP = [2.0, 5.51, 0.01]
# default width parameter of the narrow Gaussian bandpass filters
# applied in the FTAN. The bandpass filters take the form:
#
# exp[-FTAN_ALPHA * ((f-f0)/f0)**2],
#
# where f is the frequency and f0 the center frequency of the filter.
FTAN_ALPHA = 20
# relative strength of the smoothing term in the penalty function that
# the dispersion curve seeks to minimize
STRENGTH_SMOOTHING = 1.0
# replace nominal frequency (i.e., center frequency of Gaussian filters)
# with instantaneous frequency (i.e., dphi/dt(t=arrival time) with phi the
# phase of the filtered analytic signal), in the FTAN and dispersion curves?
# See Bensen et al. (2007) for technical details.
USE_INSTANTANEOUS_FREQ = True
# if the instantaneous frequency (or period) is used, we need to discard bad
# values from instantaneous periods. So:
# - instantaneous periods whose relative difference with respect to
# nominal period is greater than ``MAX_RELDIFF_INST_NOMINAL_PERIOD``
# are discarded,
# - instantaneous periods lower than ``MIN_INST_PERIOD`` are discarded,
# - instantaneous periods whose relative difference with respect to the
# running median is greater than ``MAX_RELDIFF_INST_MEDIAN_PERIOD`` are
# discarded; the running median is calculated over
# ``HALFWINDOW_MEDIAN_PERIOD`` points to the right and to the left
# of each period.
MAX_RELDIFF_INST_NOMINAL_PERIOD = 0.8
MIN_INST_PERIOD = 1.5
HALFWINDOW_MEDIAN_PERIOD = 3
MAX_RELDIFF_INST_MEDIAN_PERIOD = 0.5
# ==========
[tomography]
# ==========
# Default parameters related to the velocity selection criteria
MINSPECTSNR = 4 ; min spectral SNR to retain velocity
MINSPECTSNR_NOSDEV = 4 ; min spectral SNR to retain velocity if no std dev
MAXSDEV = 0.1 ; max sdt dev (km/s) to retain velocity
MINNBTRIMESTER = 4 ; min nb of trimesters to estimate std dev
MAXPERIOD_FACTOR = 0.08333 ; max period = *MAXPERIOD_FACTOR* * pair distance
# Default internode spacing of grid
LONSTEP = 0.1
LATSTEP = 0.1
# Default correlation length of the smoothing kernel:
# S(r,r') = exp[-|r-r'|**2 / (2 * correlation_length**2)]
CORRELATION_LENGTH = 150
# Default strength of the spatial smoothing term (alpha) and the
# weighted norm penalization term (beta) in the penalty function
ALPHA = 400
BETA = 200
# Default parameter in the damping factor of the norm penalization term,
# such that the norm is weighted by exp(- lambda_*path_density)
# With a value of 0.15, penalization becomes strong when path density < ~20
# With a value of 0.30, penalization becomes strong when path density < ~10
LAMBDA = 0.3