Matched Filtering with PyCBC Inspiral

Duncan A. Brown1,2

1Department of Physics, Syracuse University, Syracuse, NY 13244, USA

2DARK, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen, Denmark

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.

Introduction

This notebook demonstrates using the pycbc_inspiral program to find the triggers from the LIGO Hanford and Livingston observatories corresponding to the GW150914 event. This notebook uses the publicly available frame data from the LIGO Open Science Center (LOSC), input files from the PyCBC Configuration GitHub repository, and code from the PyCBC GitHub repository.

This notebook can be run from the PyCBC Docker container, or on a machine with PyCBC installed. Instructions for downloading the Docker container are available from the PyCBC home page. To start a container with an instance of the Jupyter notebook server, run the commands

docker pull pycbc/pycbc-el7:v1.7.5
docker run -p 8888:8888 --name pycbc_notebook -it pycbc/pycbc-el7:v1.7.5 /bin/bash -l

Once the container has started, this notebook can be downloaded with the command:

curl -L https://raw.githubusercontent.com/ligo-cbc/pycbc/master/examples/gw150914/PyCBCInspiral.ipynb > PyCBCInspiral.ipynb

The notebook server can be started inside the container with the command:

jupyter notebook --ip 0.0.0.0 --no-browser

You can then connect to the notebook at the URL printed by jupyter.

There are several (arbitrary) choices that can be made when constructing a matched filter for binary mergers, including the overall normalization of the signal-to-noise ratio, the reference time in the waveform for which the search reports the arrival time, and a scale factor to keep numerical quantities close to unity to prevent round-off error. Understanding these choices is essential for interpreting the reported time, phase, and amplitude needed to construct the best-fit waveform for a signal.

pycbc_inspiral uses the FindChirp algorithm for matched filtering. For details of the choices made by the algorithm used here, see chapter 4 of Duncan Brown's PhD thesis and the FindChirp paper in Phys. Rev. D.
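As a point of reference for that discussion, the standard matched-filter definitions are sketched below; the exact conventions used by the code (including the dynamic-range scaling discussed later) are described in the references above. The complex signal-to-noise ratio time series and its normalization are

$$ z(t) = 4 \int_0^\infty \frac{\tilde{s}(f)\,\tilde{h}^*(f)}{S_n(f)}\, e^{2\pi i f t}\, df, \qquad \sigma^2 = 4 \int_0^\infty \frac{|\tilde{h}(f)|^2}{S_n(f)}\, df, \qquad \rho(t) = \frac{|z(t)|}{\sigma}, $$

where $\tilde{s}(f)$ is the Fourier transform of the strain data, $\tilde{h}(f)$ is the template generated at a fiducial distance of 1 Mpc, and $S_n(f)$ is the one-sided noise power spectral density. The phase of $z(t)$ at its peak is the coalescence phase used later in this notebook.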

Setup notebook and download input data

The following code sets up plotting and imports modules that we will use later in the notebook.


In [1]:
% matplotlib inline
% config InlineBackend.figure_format = 'retina'

from matplotlib import rcParams
rcParams['figure.figsize']=(14,5)

import numpy as np
import matplotlib.pyplot as plt

import pycbc.types
import pycbc.fft
import pycbc.io.hdf
import pycbc.waveform

from pycbc import DYN_RANGE_FAC



Download the input frame data from LOSC

The pycbc_inspiral code reads strain data from Frame format files. We download the calibrated frame data from the LIGO Open Science Center.


In [2]:
!if [ ! -f H-H1_LOSC_16_V2-1126257414-4096.gwf ] ; then curl https://losc.ligo.org/s/events/GW150914/H-H1_LOSC_16_V2-1126257414-4096.gwf > H-H1_LOSC_16_V2-1126257414-4096.gwf ; fi
!if [ ! -f L-L1_LOSC_16_V2-1126257414-4096.gwf ] ; then curl https://losc.ligo.org/s/events/GW150914/L-L1_LOSC_16_V2-1126257414-4096.gwf > L-L1_LOSC_16_V2-1126257414-4096.gwf ; fi
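Having downloaded the frames, the strain can also be read directly with PyCBC's frame reader for inspection (a minimal sketch; pycbc_inspiral itself reads the frames internally via strain.from_cli()):

from pycbc import frame

# Read the H1 strain channel from the LOSC frame file
strain_h1 = frame.read_frame('H-H1_LOSC_16_V2-1126257414-4096.gwf',
                             'H1:LOSC-STRAIN')
print strain_h1.sample_rate, strain_h1.start_time, strain_h1.duration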

Download the veto files

These files are used to window out times when there are large delta-function glitches in the input strain data that are identified by the Omicron burst search. The procedure for identifying and removing these times from the search is described in the paper GW150914: First results from the search for binary black hole coalescence with Advanced LIGO.


In [3]:
!if [ ! -f H1-gating_C01_SNR300-1126051217-1129383017.txt.gz ] ; then curl -L https://raw.githubusercontent.com/ligo-cbc/pycbc-config/1e9aee13ebf85e916136afc4a9ae57f5b2d5bc64/O1/dq/H1-gating_C01_SNR300-1126051217-1129383017.txt.gz >  H1-gating_C01_SNR300-1126051217-1129383017.txt.gz ; fi
!if [ ! -f L1-gating_C01_SNR300-1126051217-1129383017.txt.gz ] ; then curl -L https://raw.githubusercontent.com/ligo-cbc/pycbc-config/1e9aee13ebf85e916136afc4a9ae57f5b2d5bc64/O1/dq/L1-gating_C01_SNR300-1126051217-1129383017.txt.gz >  L1-gating_C01_SNR300-1126051217-1129383017.txt.gz ; fi
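The gating files are gzipped plain text with one line per gate. To the best of our knowledge the columns are the GPS centre time of the gate, the half-width of the window that is zeroed, and the taper duration, all in seconds; treat this column interpretation as an assumption and check it against the PyCBC documentation. The files can be inspected directly:

import numpy as np

# np.loadtxt transparently decompresses .gz files
gates = np.loadtxt('H1-gating_C01_SNR300-1126051217-1129383017.txt.gz')
print gates.shape   # (number of gates, number of columns)
print gates[0]      # the first gate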

Download a template bank containing the best-matching template for GW150914

A bank of template parameters is needed as input to pycbc_inspiral. The code will search over each template in the bank for a matching signal in the input strain data. The full template bank used in the GW150914 search contains approximately 250,000 templates. Since we are only going to look at GW150914, we download a template bank containing only the template that best matches GW150914. This is the template indicated by a circle in Figure 1 of the paper GW150914: First results from the search for binary black hole coalescence with Advanced LIGO.


In [4]:
!if [ ! -f H1L1-GW150914_BANK-1126051217-3331800.xml ] ; then curl -L https://raw.githubusercontent.com/ligo-cbc/pycbc-config/master/O1/bank/H1L1-GW150914_BANK-1126051217-3331800.xml > H1L1-GW150914_BANK-1126051217-3331800.xml ;  fi

In order to use the template bank with the PyCBC trigger processing tools, we convert the XML format bank to HDF5.


In [5]:
!rm -f H1L1-GW150914_BANK-1126051217-3331800.hdf
!pycbc_coinc_bank2hdf --verbose --bank-file H1L1-GW150914_BANK-1126051217-3331800.xml --output H1L1-GW150914_BANK-1126051217-3331800.hdf


2017-08-10 09:13:04,245 Loading H1L1-GW150914_BANK-1126051217-3331800.xml
2017-08-10 09:13:04,342 Getting template hashes
2017-08-10 09:13:04,343 Writing to H1L1-GW150914_BANK-1126051217-3331800.hdf
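The converted bank can be inspected with h5py. The dataset names used below (mass1, mass2, spin1z, spin2z) are those written by pycbc_coinc_bank2hdf in the releases we are aware of; a short sketch:

import h5py

with h5py.File('H1L1-GW150914_BANK-1126051217-3331800.hdf', 'r') as bank:
    print bank.keys()
    # The single template in this bank is the best match to GW150914
    print bank['mass1'][0], bank['mass2'][0], bank['spin1z'][0], bank['spin2z'][0]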

Setup pycbc_inspiral

The following cells are based on downloading pycbc_inspiral from the v1.7.5 release of PyCBC. The production GW150914 search used the v1.3.2 release of PyCBC. PyCBC's Travis CI build and test suite ensures that the results of the code have not changed since the v1.3.2 release when the code is run with the same parameters used to report the detection of GW150914 in the discovery PRL.

Code setup

pycbc_inspiral is a stand-alone program that is designed to be run from the command line. To run it within an IPython notebook, we first download it with the %load command. We then modify the code to place the main program in the function

def findchirp(subtract_waveform=None):
    # main pycbc_inspiral code goes here

    return [snr, norm, corr, idx, snrv], bank, segments, gwstrain

This allows us to run the code by calling the findchirp() function in this notebook and to access the internal data structures of the code.

The argument subtract_waveform allows us to pass a pycbc.types.TimeSeries that is subtracted from the strain data prior to filtering. To do this, we add the lines

if subtract_waveform:
    logging.info('Removing waveform from input strain data')
    gwstrain -= subtract_waveform

after reading in the gwstrain using strain.from_cli().

Other than these modifications, the code is identical to the production filtering code.


In [6]:
# %load https://raw.githubusercontent.com/ligo-cbc/pycbc/v1.7.5/bin/pycbc_inspiral
#!/usr/bin/env python

# Copyright (C) 2014 Alex Nitz
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.

import sys
import logging, argparse, numpy, itertools
import pycbc
import pycbc.version
from pycbc import vetoes, psd, waveform, strain, scheme, fft, DYN_RANGE_FAC, events
from pycbc.vetoes.sgchisq import SingleDetSGChisq
from pycbc.filter import MatchedFilterControl, make_frequency_series, qtransform
from pycbc.types import TimeSeries, FrequencySeries, zeros, float32, complex64
import pycbc.fft.fftw, pycbc.version
import pycbc.opt
import pycbc.weave
import pycbc.inject
import time

last_progress_update = -1.0

def update_progress(p,u,n):
    """ updates a file 'progress.txt' with a value 0 .. 1.0 when enough (filtering) progress was made
    """
    global last_progress_update
    if p > last_progress_update + u:
        f = open(n,"w")
        if f:
            f.write("%.4f" % p)
            f.close()
        last_progress_update = p

def findchirp(subtract_waveform=None):
    global last_progress_update
    
    tstart = time.time()

    parser = argparse.ArgumentParser(usage='',
        description="Find single detector gravitational-wave triggers.")

    parser.add_argument('--version', action=pycbc.version.Version)
    parser.add_argument("-V", "--verbose", action="store_true",
                      help="print extra debugging information", default=False )
    parser.add_argument("--update-progress",
                      help="updates a file 'progress.txt' with a value 0 .. 1.0 when this amount of (filtering) progress was made",
                      type=float, default=0)
    parser.add_argument("--update-progress-file",
                      help="name of the file to write the amount of (filtering) progress to",
                      type=str, default="progress.txt")
    parser.add_argument("--output", type=str, help="FIXME: ADD")
    parser.add_argument("--bank-file", type=str, help="FIXME: ADD")
    parser.add_argument("--snr-threshold",
                      help="SNR threshold for trigger generation", type=float)
    parser.add_argument("--newsnr-threshold", type=float, metavar='THRESHOLD',
                        help="Cut triggers with NewSNR less than THRESHOLD")
    parser.add_argument("--low-frequency-cutoff", type=float,
                      help="The low frequency cutoff to use for filtering (Hz)")
    parser.add_argument("--enable-bank-start-frequency", action='store_true',
                      help="Read the starting frequency of template waveforms"
                           " from the template bank.")
    parser.add_argument("--max-template-length", type=float,
                      help="The maximum length of a template is seconds. The "
                           "starting frequency of the template is modified to "
                           "ensure the proper length")
    parser.add_argument("--enable-q-transform", action='store_true',
                      help="compute the q-transform for each segment of a "
                           "given analysis run. (default = False)")
    # add approximant arg
    pycbc.waveform.bank.add_approximant_arg(parser)
    parser.add_argument("--order", type=int,
                      help="The integer half-PN order at which to generate"
                           " the approximant. Default is -1 which indicates to use"
                           " approximant defined default.", default=-1,
                           choices = numpy.arange(-1, 9, 1))
    taper_choices = ["start","end","startend"]
    parser.add_argument("--taper-template", choices=taper_choices,
                        help="For time-domain approximants, taper the start and/or"
                        " end of the waveform before FFTing.")
    parser.add_argument("--cluster-method", choices=["template", "window"],
                        help="FIXME: ADD")
    parser.add_argument("--cluster-function", choices=["findchirp", "symmetric"],
                        help="How to cluster together triggers within a window. "
                        "'findchirp' uses a forward sliding window; 'symmetric' "
                        "will compare each window to the one before and after, keeping "
                        "only a local maximum.", default="findchirp")
    parser.add_argument("--cluster-window", type=float, default = -1,
                        help="Length of clustering window in seconds."
                        " Set to 0 to disable clustering.")
    parser.add_argument("--maximization-interval", type=float, default=0,
                        help="Maximize triggers over the template bank (ms)")
    parser.add_argument("--bank-veto-bank-file", type=str, help="FIXME: ADD")
    parser.add_argument("--chisq-snr-threshold", type=float,
                        help="Minimum SNR to calculate the power chisq")
    parser.add_argument("--chisq-bins", default=0, help=
                        "Number of frequency bins to use for power chisq. Specify"
                        " an integer for a constant number of bins, or a function "
                        "of template attributes.  Math functions are "
                        "allowed, ex. "
                        "'10./math.sqrt((params.mass1+params.mass2)/100.)'. "
                        "Non-integer values will be rounded down.")
    parser.add_argument("--chisq-threshold", type=float, default=0,
                        help="FIXME: ADD")
    parser.add_argument("--chisq-delta", type=float, default=0, help="FIXME: ADD")
    parser.add_argument("--autochi-number-points", type=int, default=0,
                        help="The number of points to use, in both directions if"
                             "doing a two-sided auto-chisq, to calculate the"
                             "auto-chisq statistic.")
    parser.add_argument("--autochi-stride", type=int, default=0,
                        help="The gap, in sample points, between the points at"
                             "which to calculate auto-chisq.")
    parser.add_argument("--autochi-two-phase", action="store_true",
                        default=False,
                        help="If given auto-chisq will be calculated by testing "
                             "against both phases of the SNR time-series. "
                             "If not given, only the phase matching the trigger "
                             "will be used.")
    parser.add_argument("--autochi-onesided", action='store', default=None,
                        choices=['left','right'],
                        help="Decide whether to calculate auto-chisq using"
                             "points on both sides of the trigger or only on one"
                             "side. If not given points on both sides will be"
                             "used. If given, with either 'left' or 'right',"
                             "only points on that side (right = forward in time,"
                             "left = back in time) will be used.")
    parser.add_argument("--autochi-reverse-template", action="store_true",
                        default=False,
                        help="If given, time-reverse the template before"
                             "calculating the auto-chisq statistic. This will"
                             "come at additional computational cost as the SNR"
                             "time-series will need recomputing for the time-"
                             "reversed template.")
    parser.add_argument("--autochi-max-valued", action="store_true",
                        default=False,
                        help="If given, store only the maximum value of the auto-"
                             "chisq over all points tested. A disadvantage of this "
                             "is that the mean value will not be known "
                             "analytically.")
    parser.add_argument("--autochi-max-valued-dof", action="store", metavar="INT",
                        type=int,
                        help="If using --autochi-max-valued this value denotes "
                             "the pre-calculated mean value that will be stored "
                             "as the auto-chisq degrees-of-freedom value.")
    parser.add_argument("--downsample-factor", type=int,
                        help="Factor that determines the interval between the "
                             "initial SNR sampling. If not set (or 1) no sparse sample "
                             "is created, and the standard full SNR is calculated.", default=1)
    parser.add_argument("--upsample-threshold", type=float,
                        help="The fraction of the SNR threshold to check the sparse SNR sample.")
    parser.add_argument("--upsample-method", choices=["pruned_fft"],
                        help="The method to find the SNR points between the sparse SNR sample.",
                        default='pruned_fft')
    parser.add_argument("--user-tag", type=str, metavar="TAG", help="""
                        This is used to identify FULL_DATA jobs for
                        compatibility with pipedown post-processing.
                        Option will be removed when no longer needed.""")
    parser.add_argument("--keep-loudest-interval", type=float,
                        help="Window in seconds to maximize triggers over bank")
    parser.add_argument("--keep-loudest-num", type=int,
                        help="Number of triggers to keep from each maximization interval")
    parser.add_argument("--gpu-callback-method", default='none')
    parser.add_argument("--use-compressed-waveforms", action="store_true", default=False,
                        help='Use compressed waveforms from the bank file.')
    parser.add_argument("--waveform-decompression-method", action='store', default=None,
                        help='Method to be used decompress waveforms from the bank file.')

    # Add options groups
    psd.insert_psd_option_group(parser)
    strain.insert_strain_option_group(parser)
    strain.StrainSegments.insert_segment_option_group(parser)
    scheme.insert_processing_option_group(parser)
    fft.insert_fft_option_group(parser)
    pycbc.opt.insert_optimization_option_group(parser)
    pycbc.weave.insert_weave_option_group(parser)
    pycbc.inject.insert_injfilterrejector_option_group(parser)
    SingleDetSGChisq.insert_option_group(parser)
    opt = parser.parse_args()

    # Check that the values returned for the options make sense
    psd.verify_psd_options(opt, parser)
    strain.verify_strain_options(opt, parser)
    strain.StrainSegments.verify_segment_options(opt, parser)
    scheme.verify_processing_options(opt, parser)
    fft.verify_fft_options(opt,parser)
    pycbc.opt.verify_optimization_options(opt, parser)
    pycbc.weave.verify_weave_options(opt, parser)

    pycbc.init_logging(opt.verbose)

    inj_filter_rejector = pycbc.inject.InjFilterRejector.from_cli(opt)
    ctx = scheme.from_cli(opt)
    gwstrain = strain.from_cli(opt, dyn_range_fac=DYN_RANGE_FAC,
                               inj_filter_rejector=inj_filter_rejector)
    
    # Add code to subtract a waveform from the input time series
    if subtract_waveform:
        logging.info('Removing waveform from input strain data')
        gwstrain -= subtract_waveform
    
    strain_segments = strain.StrainSegments.from_cli(opt, gwstrain)


    with ctx:
        fft.from_cli(opt)

        flow = opt.low_frequency_cutoff
        flen = strain_segments.freq_len
        tlen = strain_segments.time_len
        delta_f = strain_segments.delta_f


        logging.info("Making frequency-domain data segments")
        segments = strain_segments.fourier_segments()
        psd.associate_psds_to_segments(opt, segments, gwstrain, flen, delta_f,
                      flow, dyn_range_factor=DYN_RANGE_FAC, precision='single')

        # storage for values and types to be passed to event manager
        out_types = {
            'time_index'     : int,
            'snr'            : complex64,
            'chisq'          : float32,
            'chisq_dof'      : int,
            'bank_chisq'     : float32,
            'bank_chisq_dof' : int,
            'cont_chisq'     : float32,
                    }
        out_types.update(SingleDetSGChisq.returns)
        out_vals = {key: None for key in out_types}
        names = sorted(out_vals.keys())

        if len(strain_segments.segment_slices) == 0:
            logging.info("--filter-inj-only specified and no injections in analysis time")
            event_mgr = events.EventManager(
                  opt, names, [out_types[n] for n in names], psd=None,
                  gating_info=gwstrain.gating_info)
            event_mgr.finalize_template_events()
            event_mgr.write_events(opt.output)
            logging.info("Finished")
            sys.exit(0)

        if opt.enable_q_transform:
            logging.info("Performing q-transform on analysis segments")
            q_trans = qtransform.inspiral_qtransform_generator(segments)

        else:
            q_trans = {}

        # FIXME: Maybe we should use the PSD corresponding to each trigger
        event_mgr = events.EventManager(
                opt, names, [out_types[n] for n in names], psd=segments[0].psd,
                gating_info=gwstrain.gating_info, q_trans=q_trans)

        template_mem = zeros(tlen, dtype = complex64)
        cluster_window = int(opt.cluster_window * gwstrain.sample_rate)

        if opt.cluster_window == 0.0:
            use_cluster = False
        else:
            use_cluster = True

        if hasattr(ctx, "num_threads"):
                ncores = ctx.num_threads
        else:
                ncores = 1


        matched_filter = MatchedFilterControl(opt.low_frequency_cutoff, None,
                                       opt.snr_threshold, tlen, delta_f, complex64,
                                       segments, template_mem, use_cluster,
                                       downsample_factor=opt.downsample_factor,
                                       upsample_threshold=opt.upsample_threshold,
                                       upsample_method=opt.upsample_method,
                                       gpu_callback_method=opt.gpu_callback_method,
                                       cluster_function=opt.cluster_function)

        bank_chisq = vetoes.SingleDetBankVeto(opt.bank_veto_bank_file,
                                              flen, delta_f, flow, complex64,
                                              phase_order=opt.order,
                                              approximant=opt.approximant)

        power_chisq = vetoes.SingleDetPowerChisq(opt.chisq_bins, opt.chisq_snr_threshold)

        autochisq = vetoes.SingleDetAutoChisq(opt.autochi_stride,
                                     opt.autochi_number_points,
                                     onesided=opt.autochi_onesided,
                                     twophase=opt.autochi_two_phase,
                                     reverse_template=opt.autochi_reverse_template,
                                     take_maximum_value=opt.autochi_max_valued,
                                     maximal_value_dof=opt.autochi_max_valued_dof)

        logging.info("Overwhitening frequency-domain data segments")
        for seg in segments:
            seg /= seg.psd

        logging.info("Read in template bank")
        bank = waveform.FilterBank(opt.bank_file, flen, delta_f,
            low_frequency_cutoff=None if opt.enable_bank_start_frequency else flow,
            dtype=complex64, phase_order=opt.order,
            taper=opt.taper_template, approximant=opt.approximant,
            out=template_mem, max_template_length=opt.max_template_length,
            enable_compressed_waveforms=True if opt.use_compressed_waveforms else False,
            waveform_decompression_method=
            opt.waveform_decompression_method if opt.use_compressed_waveforms else None)

        sg_chisq = SingleDetSGChisq.from_cli(opt, bank, opt.chisq_bins)

        ntemplates = len(bank)
        nfilters = 0

        logging.info("Full template bank size: %s", ntemplates)
        bank.template_thinning(inj_filter_rejector)
        if not len(bank) == ntemplates:
            logging.info("Template bank size after thinning: %s", len(bank))

        tsetup = time.time() - tstart

        # Note: in the class-based approach used now, 'template' is not explicitly used
        # within the loop.  Rather, the iteration simply fills the memory specifed in
        # the 'template_mem' argument to MatchedFilterControl with the next template
        # from the bank.
        for t_num in xrange(len(bank)):
            tmplt_generated = False

            for s_num, stilde in enumerate(segments):
                # Filter check checks the 'inj_filter_rejector' options to
                # determine whether
                # to filter this template/segment if injections are present.
                if not inj_filter_rejector.template_segment_checker(
                        bank, t_num, stilde, opt.gps_start_time):
                    continue
                if not tmplt_generated:
                    template = bank[t_num]
                    event_mgr.new_template(tmplt=template.params,
                        sigmasq=template.sigmasq(segments[0].psd))
                    tmplt_generated = True

                if opt.cluster_method == "window":
                    cluster_window = int(opt.cluster_window * gwstrain.sample_rate)
                if opt.cluster_method == "template":
                    cluster_window = \
                        int(template.chirp_length * gwstrain.sample_rate)

                if opt.update_progress:
                    update_progress((t_num + (s_num / float(len(segments))) ) / len(bank),
                                    opt.update_progress, opt.update_progress_file)
                logging.info("Filtering template %d/%d segment %d/%d" %
                             (t_num + 1, len(bank), s_num + 1, len(segments)))

                nfilters = nfilters + 1
                snr, norm, corr, idx, snrv = \
                   matched_filter.matched_filter_and_cluster(s_num,
                                                             template.sigmasq(stilde.psd),
                                                             cluster_window,
                                                             epoch=stilde._epoch)

                if not len(idx):
                    continue

                out_vals['bank_chisq'], out_vals['bank_chisq_dof'] = \
                      bank_chisq.values(template, stilde.psd, stilde, snrv, norm,
                                        idx+stilde.analyze.start)

                out_vals['chisq'], out_vals['chisq_dof'] = \
                      power_chisq.values(corr, snrv, norm, stilde.psd,
                                         idx+stilde.analyze.start, template)

                out_vals['sg_chisq'] = sg_chisq.values(stilde, template, stilde.psd,
                                              snrv, norm,
                                              out_vals['chisq'],
                                              out_vals['chisq_dof'],
                                              idx+stilde.analyze.start)

                out_vals['cont_chisq'] = \
                      autochisq.values(snr, idx+stilde.analyze.start, template,
                                       stilde.psd, norm, stilde=stilde,
                                       low_frequency_cutoff=flow)

                idx += stilde.cumulative_index

                out_vals['time_index'] = idx
                out_vals['snr'] = snrv * norm

                event_mgr.add_template_events(names, [out_vals[n] for n in names])

            event_mgr.cluster_template_events("time_index", "snr", cluster_window)
            event_mgr.finalize_template_events()

    logging.info("Found %s triggers" % str(len(event_mgr.events)))

    if opt.chisq_threshold and opt.chisq_bins:
        logging.info("Removing triggers with poor chisq")
        event_mgr.chisq_threshold(opt.chisq_threshold, opt.chisq_bins,
                                  opt.chisq_delta)
        logging.info("%d remaining triggers" % len(event_mgr.events))

    if opt.newsnr_threshold and opt.chisq_bins:
        logging.info("Removing triggers with NewSNR below threshold")
        event_mgr.newsnr_threshold(opt.newsnr_threshold)
        logging.info("%d remaining triggers" % len(event_mgr.events))

    if opt.keep_loudest_interval:
        logging.info("Removing triggers that are not within the top %s loudest"
                     " of a %s second interval" % (opt.keep_loudest_num,
                                                   opt.keep_loudest_interval))
        event_mgr.keep_loudest_in_interval(opt.keep_loudest_interval * opt.sample_rate,
                                           opt.keep_loudest_num)
        logging.info("%d remaining triggers" % len(event_mgr.events))

    if opt.injection_window and hasattr(gwstrain, 'injections'):
        logging.info("Keeping triggers within %s seconds of injection" % opt.injection_window)
        event_mgr.keep_near_injection(opt.injection_window, gwstrain.injections)
        logging.info("%d remaining triggers" % len(event_mgr.events))

    if opt.maximization_interval:
        logging.info("Maximizing triggers over %s ms window" % opt.maximization_interval)
        window = int(opt.maximization_interval * gwstrain.sample_rate / 1000)
        event_mgr.maximize_over_bank("time_index", "snr", window)
        logging.info("%d remaining triggers" % len(event_mgr.events))

    tstop = time.time()
    run_time = tstop - tstart
    event_mgr.save_performance(ncores, nfilters, ntemplates, run_time, tsetup)

    logging.info("Writing out triggers")
    event_mgr.write_events(opt.output)

    if opt.fftw_output_float_wisdom_file:
        fft.fftw.export_single_wisdom_to_filename(opt.fftw_output_float_wisdom_file)

    if opt.fftw_output_double_wisdom_file:
        fft.fftw.export_double_wisdom_to_filename(opt.fftw_output_double_wisdom_file)

    logging.info("Finished")
    
    return [snr * 1, norm, corr * 1, idx, snrv], bank, segments, gwstrain

Running the code

To run the code, we first set up the command line arguments for pycbc_inspiral before calling the findchirp() function. The command line arguments below are taken from the production analysis for the GW150914 result. We first run the code for the LIGO Hanford detector, and then for the LIGO Livingston detector.

The only differences between the command line arguments used here and the production arguments used in the discovery PRL are:

  1. The template bank used contains only the best-matching template for GW150914.
  2. The --trig-start-time and --trig-end-time arguments that determine how much data will be searched for triggers are set to a 64 second interval centered on the GPS time of GW150914 (1126259462) to avoid unnecessary computation.
  3. We use the SEOBNRv2 time-domain template, rather than the SEOBNRv2_ROM_DoubleSpin frequency-domain template, as using a time-domain waveform makes this notebook slightly simpler.
  4. We start the filtering at 25 Hz, rather than 30 Hz, to generate a slightly longer template for better visualization.

Set up command line arguments common to both detectors

The command line arguments below are common to running the code on both the Hanford and Livingston detector data.


In [7]:
search_f_low = 25.0
argv = ['pycbc_inspiral', '--verbose',    
'--approximant=SEOBNRv2', '--order=-1', '--taper-template=start',
'--bank-file=H1L1-GW150914_BANK-1126051217-3331800.xml',
'--pad-data=8', '--sample-rate=4096', '--strain-high-pass=20',
'--segment-length=256', '--segment-start-pad=112', '--segment-end-pad=16',     
'--psd-estimation=median', '--psd-segment-stride=8', '--psd-segment-length=16', '--psd-inverse-length=16',
'--low-frequency-cutoff={}'.format(search_f_low), 
'--trig-start-time=1126259430', '--trig-end-time=1126259494',
"--chisq-bins=0.4*get_freq('fSEOBNRv2Peak',params.mass1,params.mass2,params.spin1z,params.spin2z)**(2./3.)",
'--snr-threshold=5.5', '--newsnr-threshold=5',
'--cluster-method=window', '--cluster-window=4', '--cluster-function=symmetric',
'--injection-window=4.5', '--filter-inj-only',       
'--processing-scheme=cpu']

Run the code for the Hanford detector

We add the additional command line arguments that are specific to the Hanford detector and run the code.


In [8]:
argv_h1_data = ['--frame-files=H-H1_LOSC_16_V2-1126257414-4096.gwf', '--channel-name=H1:LOSC-STRAIN',
                '--gps-start-time=1126257636', '--gps-end-time=1126259684',
                '--gating-file=H1-gating_C01_SNR300-1126051217-1129383017.txt.gz'
               ]

In [9]:
argv_h1_out = ['--output=H1-INSPIRAL_FULL_DATA_JOB0-1126257771-1837.hdf']

In [10]:
argv_h1 = argv + argv_h1_data + argv_h1_out
sys.argv = argv_h1

In [11]:
!rm -f H1-INSPIRAL_FULL_DATA_JOB0-1126257771-1837.hdf
h1_result = findchirp()


2017-08-10 09:13:08,051 Running with CPU support: 1 threads
2017-08-10 09:13:08,054 Reading Frames
2017-08-10 09:13:54,610 Highpass Filtering
2017-08-10 09:13:57,118 Converting to float32
2017-08-10 09:13:57,374 Gating glitches
2017-08-10 09:13:57,405 Resampling data
2017-08-10 09:14:09,882 Highpass Filtering
2017-08-10 09:14:10,994 Remove Padding
2017-08-10 09:14:10,999 Making frequency-domain data segments
2017-08-10 09:14:12,025 Overwhitening frequency-domain data segments
2017-08-10 09:14:12,031 Read in template bank
2017-08-10 09:14:12,206 Full template bank size: 1
2017-08-10 09:14:12,210 0: generating SEOBNRv2 from 25.0 Hz
2017-08-10 09:14:13,116 Filtering template 1/1 segment 1/1
2017-08-10 09:14:13,178 2 points above threshold
2017-08-10 09:14:13,179 ...Doing power chisq
2017-08-10 09:14:13,181 ...Calculating power chisq bins
2017-08-10 09:14:13,191 doing fast point chisq
2017-08-10 09:14:13,204 Found 2 triggers
2017-08-10 09:14:13,206 Removing triggers with NewSNR below threshold
2017-08-10 09:14:13,207 1 remaining triggers
2017-08-10 09:14:13,209 Writing out triggers
2017-08-10 09:14:13,227 Finished

Run the code for the Livingston detector

We add the additional command line arguments that are specific to the Livingston detector and run the code.


In [12]:
argv_l1_data = ['--frame-files=L-L1_LOSC_16_V2-1126257414-4096.gwf', '--channel-name=L1:LOSC-STRAIN',
                '--gps-start-time=1126258108', '--gps-end-time=1126260156',
                '--gating-file=L1-gating_C01_SNR300-1126051217-1129383017.txt.gz'
               ]

In [13]:
argv_l1_out = ['--output=L1-INSPIRAL_FULL_DATA_JOB0-1126258302-1591.hdf']

In [14]:
argv_l1 = argv + argv_l1_data + argv_l1_out
sys.argv = argv_l1

In [15]:
!rm -f L1-INSPIRAL_FULL_DATA_JOB0-1126258302-1591.hdf
l1_result = findchirp()


2017-08-10 09:14:14,226 Running with CPU support: 1 threads
2017-08-10 09:14:14,227 Reading Frames
2017-08-10 09:15:19,208 Highpass Filtering
2017-08-10 09:15:23,316 Converting to float32
2017-08-10 09:15:23,886 Gating glitches
2017-08-10 09:15:23,899 Resampling data
2017-08-10 09:15:32,590 Highpass Filtering
2017-08-10 09:15:33,087 Remove Padding
2017-08-10 09:15:33,089 Making frequency-domain data segments
2017-08-10 09:15:33,818 Overwhitening frequency-domain data segments
2017-08-10 09:15:33,824 Read in template bank
2017-08-10 09:15:33,981 Full template bank size: 1
2017-08-10 09:15:33,984 0: generating SEOBNRv2 from 25.0 Hz
2017-08-10 09:15:34,594 Filtering template 1/1 segment 1/1
2017-08-10 09:15:34,633 1 points above threshold
2017-08-10 09:15:34,635 ...Doing power chisq
2017-08-10 09:15:34,637 ...Calculating power chisq bins
2017-08-10 09:15:34,647 doing fast point chisq
2017-08-10 09:15:34,654 Found 1 triggers
2017-08-10 09:15:34,655 Removing triggers with NewSNR below threshold
2017-08-10 09:15:34,657 1 remaining triggers
2017-08-10 09:15:34,659 Writing out triggers
2017-08-10 09:15:34,680 Finished

Process the results

Read triggers from the output files

The pycbc_inspiral code writes its output to the HDF5 files specified by the --output argument. We read in these files and find the triggers with the largest signal-to-noise ratio, as these triggers correspond to GW150914.


In [16]:
h1_triggers = pycbc.io.hdf.SingleDetTriggers(
    'H1-INSPIRAL_FULL_DATA_JOB0-1126257771-1837.hdf',
    'H1L1-GW150914_BANK-1126051217-3331800.hdf',
    None, None, None, 'H1'
)

l1_triggers = pycbc.io.hdf.SingleDetTriggers(
    'L1-INSPIRAL_FULL_DATA_JOB0-1126258302-1591.hdf',
    'H1L1-GW150914_BANK-1126051217-3331800.hdf',
    None, None, None, 'L1'
)

imax = np.argmax(h1_triggers.snr)
t_h1 = h1_triggers.end_time[imax]
t_idx_h1 = h1_result[0][3][np.argmax(np.abs(h1_result[0][4]))]
snr_h1 = h1_triggers.snr[imax]
sigmasq = h1_triggers.get_column('sigmasq')[0]
eff_distance_h1 = (sigmasq)**0.5 / snr_h1

imax = np.argmax(l1_triggers.snr)
t_l1 = l1_triggers.end_time[imax]
t_idx_l1 = l1_result[0][3][np.argmax(np.abs(l1_result[0][4]))]
snr_l1 = l1_triggers.snr[imax]
sigmasq = l1_triggers.get_column('sigmasq')[0]
eff_distance_l1 = (sigmasq)**0.5 / snr_l1


2017-08-10 09:15:34,731 Loading triggers
2017-08-10 09:15:34,764 Loading bank
2017-08-10 09:15:34,791 Loading triggers
2017-08-10 09:15:34,800 Loading bank

We print the arrival time, signal-to-noise ratio, and time delay for the H1 and L1 triggers. We also print the effective distance measured by the search code, as this is needed to scale the amplitude of the best-fit template.
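For reference, the effective distance follows directly from the template normalization $\sigma$ defined earlier: since the template is generated at 1 Mpc,

$$ D_\mathrm{eff} = \frac{\sigma}{\rho}\ \mathrm{Mpc}, $$

which is the (sigmasq)**0.5 / snr computation in the cell above.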

pycbc_inspiral records the sample index of the trigger, relative to the start of the strain time series that is read in. We save this arrival-time offset, as it will be used later to set the arrival time of the best-fit template.


In [17]:
print "H1 Time of event is {} with signal-to-noise ratio {}".format(t_h1, snr_h1)
print "L1 Time of event is {} with signal-to-noise ratio {}".format(t_l1, snr_l1)
print "HL arrival time difference = {} ms".format(t_h1-t_l1)
print "H1 Effective distance = {} Mpc, L1 effective distance = {} Mpc".format(eff_distance_h1, eff_distance_l1)
print "H1 arrival time offset = {}, L1 arrival time offset = {}".format(t_idx_h1,t_idx_l1)


H1 Time of event is 1126259462.42 with signal-to-noise ratio 19.8119182587
L1 Time of event is 1126259462.42 with signal-to-noise ratio 13.1280155182
HL arrival time difference = 0.007080078125 s
H1 Effective distance = 956.212235338 Mpc, L1 effective distance = 1203.53080916 Mpc
H1 arrival time offset = 7481025, L1 arrival time offset = 5547684
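As a consistency check (a sketch, assuming the 4096 Hz sample rate set on the command line), the arrival-time offset can be converted back to a GPS time using the strain time series returned by findchirp():

# h1_result[3] is the gwstrain time series returned by findchirp()
h1_strain_start = float(h1_result[3].start_time)
print "H1 trigger GPS time from index:", h1_strain_start + t_idx_h1 / 4096.0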

Plot the SNR time series for the two detectors

Now we plot the signal-to-noise ratio time series for each detector. The filtering function used by pycbc_inspiral generates the complex signal-to-noise ratio time series. In our code, this is returned in the array h1_result[0][0]. The complex signal-to-noise ratio is scaled by a normalization factor stored in h1_result[0][1]. We plot the absolute value of the complex signal-to-noise ratio, relative to GPS time 1126259462. For comparison, we draw vertical lines at the time of the GW150914 trigger. It can be seen that the signal-to-noise ratio time series peaks at the time of the trigger, as expected. We also store the array index of this peak, as we will use it to compute the phase of the best-fit template.


In [18]:
h1_complex_snr = h1_result[0][0] * h1_result[0][1]
h1_complex_snr.start_time -= 1126259462
h1_complex_snr = h1_complex_snr.time_slice(0.3, 0.5)
h1_snr_data = abs(h1_complex_snr)
h1_imax = h1_snr_data.abs_arg_max()

l1_complex_snr = l1_result[0][0] * l1_result[0][1]
l1_complex_snr.start_time -= 1126259462
l1_complex_snr = l1_complex_snr.time_slice(0.3, 0.5)
l1_snr_data = abs(l1_complex_snr)
l1_imax = l1_snr_data.abs_arg_max()

plt.plot(h1_snr_data.sample_times, h1_snr_data, color='r', label="H1")
plt.axvline(t_h1 - 1126259462, color='r', linestyle=':')

plt.plot(l1_snr_data.sample_times, l1_snr_data, color='g', label="L1")
plt.axvline(t_l1 - 1126259462, color='g', linestyle=':')

plt.legend(loc='best')
plt.xlim([0.3,0.5])
plt.ylim([0,20])
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Signal-to-noise ratio")


Out[18]:
<matplotlib.text.Text at 0x60d4990>

Compute the phase of the best fit template

We use the peak index and the complex signal-to-noise ratio to compute the coalescence phase of the best-fit template.


In [19]:
phase_h1 = np.angle(h1_complex_snr[h1_imax])
phase_l1 = np.angle(l1_complex_snr[l1_imax])
print "H1 Coalescence phase = {} radians, L1 Coalescence phase = {} radians".format(phase_h1, phase_l1)


H1 Coalescence phase = 2.00336503983 radians, L1 Coalescence phase = -0.933851242065 radians

Plot the template used in the bank

For visual comparison, we plot the template used in the search compared to an SEOBNRv2 waveform directly generated with the template parameters. pycbc_inspiral stores templates in the frequency domain, so we convert the template to the time domain for plotting with the command:

template = h1_bank[0].to_timeseries()

The waveforms are computed for a signal at a distance of 1 Mpc. pycbc_inspiral applies an internal scaling of approximately 6e20 to scale these quantities up to order unity (the exact scaling is stored in the constant pycbc.DYN_RANGE_FAC). We divide the amplitude of the template by DYN_RANGE_FAC so that it has the correct amplitude.
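As a quick check (in the PyCBC releases we are aware of the constant equals 2**69, but verify rather than assume):

# DYN_RANGE_FAC was imported from pycbc at the top of the notebook
print DYN_RANGE_FAC
print 2.0**69   # expected value (an assumption; check against your release)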

The FindChirp algorithm expects the peak time of the template to be at t = 0 in the time domain so that the signal-to-noise ratio time series is time-stamped relative to the peak (coalescence) time of the waveform. Since the FFT is periodic, this means that the signal prior to the merger is at the end of the time-domain template and the merger-ringdown waveform is at the start of the time-domain template. We use the roll function to shift the sample points of the template so that it is contiguous and can be plotted against the result of calling pycbc.waveform.get_td_waveform() to generate the SEOBNRv2 waveform.

Notice that the only difference between the template waveform and the reference SEOBNRv2 waveform is that the search template is windowed at the start to prevent Gibbs ringing when it is transformed into the frequency domain.


In [20]:
h1_bank = h1_result[1]
l1_bank = l1_result[1]

In [21]:
hp, hc = pycbc.waveform.get_td_waveform(
    approximant="SEOBNRv2",
    mass1=h1_bank.table['mass1'][0],
    mass2=h1_bank.table['mass2'][0],
    spin1z=h1_bank.table['spin1z'][0],
    spin2z=h1_bank.table['spin2z'][0],
    f_lower=search_f_low,
    delta_t=h1_bank.delta_t)

template = h1_bank[0].to_timeseries() / DYN_RANGE_FAC
template.roll(int(-template.start_time / template.delta_t))

plt.plot(template.sample_times, template,
         color='r',label='Search template')
plt.plot(hp.sample_times, hp, linestyle=':', label='SEOBNRv2')

np.savetxt('SearchTemplate_hp.txt', np.c_[template.sample_times.numpy(), 
                                          template.numpy()])

plt.xlim([-0.3,0.05])
plt.title('Search template at 1 Mpc')
plt.legend(loc='best')
plt.ylabel('Strain')
plt.xlabel('Time from waveform peak (seconds)')


2017-08-10 09:15:36,795 0: generating SEOBNRv2 from 25.0 Hz
Out[21]:
<matplotlib.text.Text at 0xab053d0>

Generate the best fit template

To generate the best-fit template, we must take the template above and apply the appropriate time, phase, and amplitude shifts reported by the search code.

Create the time-domain template

We apply the correct phase shift and amplitude scaling measured by the search code to the frequency-domain template and then inverse Fourier transform the search template into the time domain.


In [22]:
h1_best_fit_waveform = (h1_bank[0] * np.exp(1.0j * phase_h1)).to_timeseries()
h1_best_fit_waveform /= eff_distance_h1
l1_best_fit_waveform = (l1_bank[0] * np.exp(1.0j * phase_l1)).to_timeseries()
l1_best_fit_waveform /= eff_distance_l1


2017-08-10 09:16:05,952 0: generating SEOBNRv2 from 25.0 Hz
2017-08-10 09:16:07,001 0: generating SEOBNRv2 from 25.0 Hz

Place the template in the time series

Using the measured arrival index we place the template at the correct location in the time series created in the previous cell.

When this cell is evaluated, the time series h1_remove and l1_remove will contain the correctly time, phase, and amplitude shifted waveform.


In [23]:
h1_strain = h1_result[3]
l1_strain = l1_result[3]

h1_remove = h1_best_fit_waveform * 1
h1_remove.roll(-500)
h1_remove.resize(len(h1_strain))
h1_remove.roll(t_idx_h1 + 500 - len(h1_best_fit_waveform))
h1_remove.start_time = h1_strain.start_time

l1_remove = l1_best_fit_waveform * 1
l1_remove.roll(-500)
l1_remove.resize(len(l1_strain))
l1_remove.roll(t_idx_l1 + 500 - len(l1_best_fit_waveform))
l1_remove.start_time = l1_strain.start_time

Compare the best-fit waveform to the NR data

We now compare the best-fit template to the raw numerical relativity data shown in Figure 2 of the PRL, and the filtered NR waveform shown in Figure 1 of the PRL.

Load the NR data from Figures 1 and 2 of the PRL

The LOSC NR data is scaled by a factor of 1e21. We undo this scaling when we plot the data for comparison to the best-fit waveform.


In [24]:
!if [ ! -f fig2-unfiltered-waveform-H.txt ] ; then curl -L "https://losc.ligo.org/s/events/GW150914/P150914/fig2-unfiltered-waveform-H.txt" > fig2-unfiltered-waveform-H.txt ; fi
!if [ ! -f fig2-unfiltered-waveform-L.txt ] ; then curl -L "https://losc.ligo.org/s/events/GW150914/P150914/fig2-unfiltered-waveform-L.txt" > fig2-unfiltered-waveform-L.txt ; fi
!if [ ! -f fig1-waveform-H.txt ] ; then curl -L "https://losc.ligo.org/s/events/GW150914/P150914/fig1-waveform-H.txt" > fig1-waveform-H.txt ; fi
!if [ ! -f fig1-waveform-L.txt ] ; then curl -L "https://losc.ligo.org/s/events/GW150914/P150914/fig1-waveform-L.txt" > fig1-waveform-L.txt ; fi

In [25]:
nr_filt_data = np.loadtxt("fig1-waveform-H.txt")
dt = nr_filt_data[1,0] - nr_filt_data[0,0]
h1_nr_filt_hp = pycbc.types.TimeSeries(nr_filt_data[:,1], delta_t=dt, epoch=nr_filt_data[0,0])

nr_data = np.loadtxt("fig2-unfiltered-waveform-H.txt")
dt=nr_data[1,0]-nr_data[0,0]
h1_nr_hp = pycbc.types.TimeSeries(nr_data[:,1], delta_t=dt, epoch=nr_data[0,0])

nr_filt_data = np.loadtxt("fig1-waveform-L.txt")
dt= nr_filt_data[1,0] - nr_filt_data[0,0]
l1_nr_filt_hp = pycbc.types.TimeSeries(nr_filt_data[:,1], delta_t=dt, epoch=nr_filt_data[0,0])

nr_data = np.loadtxt("fig2-unfiltered-waveform-L.txt")
dt = nr_data[1,0] - nr_data[0,0]
l1_nr_hp = pycbc.types.TimeSeries(nr_data[:,1], delta_t=dt, epoch=nr_data[0,0])

Trim the best fit waveform

For faster plotting, we trim the full h1_remove and l1_remove time series to an interval of 2 seconds before and 2 seconds after GPS time 1126259462.


In [26]:
h1_bf = h1_remove.time_slice(1126259462-2, 1126259462+2)
l1_bf = l1_remove.time_slice(1126259462-2, 1126259462+2)

h1_bf.save('H1BestFitTemplate.txt')
l1_bf.save('L1BestFitTemplate.txt')

Compare the best-fit waveform to the Fig 1 data

We plot the best-fit waveform from the search and compare it to the data from Figure 1 of the PRL. The L1 raw NR waveform is shifted by 0.007 s and multiplied by -1, as described in the caption of Figure 1.

Notice that in both of these plots there is a phase shift between the raw NR waveform and the filtered NR waveform. This is because the filters used in Figure 1 of the PRL are not phase preserving.


In [27]:
plt.plot(h1_nr_filt_hp.sample_times, h1_nr_filt_hp*1e-21,
         color='b', label='Fig 1 NR waveform', linestyle=':')
plt.plot(h1_nr_hp.sample_times, h1_nr_hp*1e-21 ,color='g', label='SXS:BBH:0305')
plt.plot(h1_bf.sample_times-1126259462, h1_bf/DYN_RANGE_FAC, color='r', label='Best fit template')

np.savetxt('BestFitTemplateH1_Fig1.txt', np.c_[h1_bf.sample_times-1126259462,h1_bf/DYN_RANGE_FAC])

plt.xlim([0.25, 0.45])
plt.ylim([-1.4e-21, 1.4e-21])
plt.title('LIGO Hanford Detector')
plt.legend(loc='best')
plt.ylabel('Strain')
plt.xlabel('Time from 1126259462 (seconds)')


Out[27]:
<matplotlib.text.Text at 0x60b0790>

In [28]:
plt.plot(l1_nr_filt_hp.sample_times, l1_nr_filt_hp*1e-21,
         color='b', label='Fig 1 NR waveform', linestyle=':')
plt.plot(l1_nr_hp.sample_times-0.007, l1_nr_hp*-1e-21, color='g', label='SXS:BBH:0305')
plt.plot(l1_bf.sample_times-1126259462, l1_bf/DYN_RANGE_FAC, color='r', label='Best fit template')

np.savetxt('BestFitTemplateL1_Fig1.txt', np.c_[l1_bf.sample_times-1126259462,l1_bf/DYN_RANGE_FAC])

plt.xlim([0.3,0.45])
plt.ylim([-1.4e-21,1.4e-21])
plt.title('LIGO Livingston Detector')
plt.legend(loc='best')
plt.ylabel('Strain')
plt.xlabel('Time from 1126259462 (seconds)')


Out[28]:
<matplotlib.text.Text at 0xc93a250>

Filter the data with the best-fit waveform subtracted

We now re-run pycbc_inspiral, but this time we pass the best-fit waveform to the findchirp() function so that it is removed from the data prior to the matched filter.

Since we have removed the GW150914 signal, the search code should not find any triggers. In the case where no triggers are found, pycbc_inspiral's filtering function does not return any data. To force the code to return the signal-to-noise ratio time series, we set the SNR and newSNR thresholds to unity before running it.

Hanford detector


In [29]:
sys.argv = [a for a in argv_h1 if not (a.startswith('--snr-threshold') or a.startswith('--newsnr-threshold'))]
sys.argv += ['--snr-threshold=1', '--newsnr-threshold=1']

!rm -f H1-INSPIRAL_FULL_DATA_JOB0-1126257771-1837.hdf
h1_residual_result = findchirp(h1_remove)


2017-08-10 09:16:14,473 Running with CPU support: 1 threads
2017-08-10 09:16:14,475 Reading Frames
2017-08-10 09:17:10,596 Highpass Filtering
2017-08-10 09:17:12,891 Converting to float32
2017-08-10 09:17:13,180 Gating glitches
2017-08-10 09:17:13,205 Resampling data
2017-08-10 09:17:21,820 Highpass Filtering
2017-08-10 09:17:22,419 Remove Padding
2017-08-10 09:17:22,421 Removing waveform from input strain data
2017-08-10 09:17:22,447 Making frequency-domain data segments
2017-08-10 09:17:23,307 Overwhitening frequency-domain data segments
2017-08-10 09:17:23,313 Read in template bank
2017-08-10 09:17:23,504 Full template bank size: 1
2017-08-10 09:17:23,507 0: generating SEOBNRv2 from 25.0 Hz
2017-08-10 09:17:24,335 Filtering template 1/1 segment 1/1
2017-08-10 09:17:24,381 5 points above threshold
2017-08-10 09:17:24,383 ...Doing power chisq
2017-08-10 09:17:24,387 ...Calculating power chisq bins
2017-08-10 09:17:24,400 doing fast point chisq
2017-08-10 09:17:24,415 Found 5 triggers
2017-08-10 09:17:24,418 Removing triggers with NewSNR below threshold
2017-08-10 09:17:24,422 5 remaining triggers
2017-08-10 09:17:24,424 Writing out triggers
2017-08-10 09:17:24,456 Finished

Livingston detector


In [30]:
sys.argv = [a for a in argv_l1 if not (a.startswith('--snr-threshold') or a.startswith('--newsnr-threshold'))]
sys.argv += ['--snr-threshold=1', '--newsnr-threshold=1']

!rm -f L1-INSPIRAL_FULL_DATA_JOB0-1126258302-1591.hdf
l1_residual_result = findchirp(l1_remove)


2017-08-10 09:17:24,826 Running with CPU support: 1 threads
2017-08-10 09:17:24,828 Reading Frames
2017-08-10 09:18:20,001 Highpass Filtering
2017-08-10 09:18:22,147 Converting to float32
2017-08-10 09:18:22,416 Gating glitches
2017-08-10 09:18:22,432 Resampling data
2017-08-10 09:18:31,819 Highpass Filtering
2017-08-10 09:18:32,404 Remove Padding
2017-08-10 09:18:32,406 Removing waveform from input strain data
2017-08-10 09:18:32,439 Making frequency-domain data segments
2017-08-10 09:18:33,267 Overwhitening frequency-domain data segments
2017-08-10 09:18:33,272 Read in template bank
2017-08-10 09:18:33,448 Full template bank size: 1
2017-08-10 09:18:33,450 0: generating SEOBNRv2 from 25.0 Hz
2017-08-10 09:18:34,143 Filtering template 1/1 segment 1/1
2017-08-10 09:18:34,188 6 points above threshold
2017-08-10 09:18:34,190 ...Doing power chisq
2017-08-10 09:18:34,192 ...Calculating power chisq bins
2017-08-10 09:18:34,203 doing fast point chisq
2017-08-10 09:18:34,217 Found 6 triggers
2017-08-10 09:18:34,218 Removing triggers with NewSNR below threshold
2017-08-10 09:18:34,221 6 remaining triggers
2017-08-10 09:18:34,224 Writing out triggers
2017-08-10 09:18:34,248 Finished

Plot the signal-to-noise ratio time series

Now we plot the signal-to-noise ratio time series returned by the search for each detector, with vertical lines indicating the trigger times of GW150914.

Notice that there is no trigger corresponding to GW150914 in the output and the values of the SNR at the time of GW150914 are consistent with noise.


In [31]:
h1_complex_snr = h1_residual_result[0][0] * h1_residual_result[0][1]
h1_complex_snr.start_time -= 1126259462
h1_complex_snr = h1_complex_snr.time_slice(0.3, 0.5)
h1_snr_data = abs(h1_complex_snr)

l1_complex_snr = l1_residual_result[0][0] * l1_residual_result[0][1]
l1_complex_snr.start_time -= 1126259462
l1_complex_snr = l1_complex_snr.time_slice(0.3, 0.5)
l1_snr_data = abs(l1_complex_snr)

plt.plot(h1_snr_data.sample_times, h1_snr_data, color='r', label="H1")
plt.axvline(t_h1 - 1126259462, color='r', linestyle=':')

plt.plot(l1_snr_data.sample_times, l1_snr_data, color='g', label="L1")
plt.axvline(t_l1 - 1126259462, color='g', linestyle=':')

plt.legend(loc='best')
plt.xlim([0.3,0.5])
plt.ylim([0,4])
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Signal-to-noise ratio")


Out[31]:
<matplotlib.text.Text at 0x1219b090>

Plot the whitened strain data

The matched filter in pycbc_inspiral stores the overwhitened strain data. Data is overwhitened in the frequency domain by dividing by the time-truncated inverse power spectral density of the data, as described in the FindChirp paper. To plot the whitened strain, we multiply the frequency-domain overwhitened data by the square root of the power spectral density used to overwhiten it, then use PyCBC's to_timeseries() function to transform back to the time domain.
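In symbols: if $\tilde{d}(f)$ is the frequency-domain data and $S_n(f)$ the power spectral density, the stored overwhitened data is $\tilde{d}(f)/S_n(f)$, so multiplying by $\sqrt{S_n(f)}$ recovers the whitened data,

$$ \tilde{d}_w(f) = \frac{\tilde{d}(f)}{S_n(f)} \times \sqrt{S_n(f)} = \frac{\tilde{d}(f)}{\sqrt{S_n(f)}}, $$

which is what the (segment * segment.psd**0.5).to_timeseries() cells below compute.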

Plot the power spectrum used to overwhiten the data

Before plotting the strain data in the time domain, we plot the power spectral density of the data for each detector. The vertical lines show the frequencies that the original GW150914 LOSC tutorial suggests should be notched out from the data.


In [32]:
plt.loglog(h1_result[2][0].psd.sample_frequencies, h1_result[2][0].psd, color='r')
for l in [14.0, 34.70, 35.30, 35.90, 36.70, 37.30, 40.95, 60.00, 
         120.00, 179.99, 304.99, 331.49, 510.02, 1009.99]:
    plt.axvline(l,color='k',linestyle=':')
plt.xlim([25,2048])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Power spectral density $\\times$ {0:.1e}'.format(DYN_RANGE_FAC*DYN_RANGE_FAC))
plt.title('LIGO Hanford Detector')

np.savetxt('PowerSpectrumH1.txt', np.c_[h1_result[2][0].psd.sample_frequencies,
                                        h1_result[2][0].psd])



In [33]:
plt.loglog(l1_result[2][0].psd.sample_frequencies, l1_result[2][0].psd, color='g')
for l in [14.0, 34.70, 35.30, 35.90, 36.70, 37.30, 40.95, 60.00, 
         120.00, 179.99, 304.99, 331.49, 510.02, 1009.99]:
    plt.axvline(l,color='k',linestyle=':')
plt.xlim([25,2048])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Power spectral density $\\times$ {0:.1e}'.format(DYN_RANGE_FAC*DYN_RANGE_FAC))
plt.title('LIGO Livingston Detector')

np.savetxt('PowerSpectrumL1.txt', np.c_[l1_result[2][0].psd.sample_frequencies,
                                        l1_result[2][0].psd])


Calculate the whitened data

Note: this whitened strain is not used by any downstream product and it is not normally constructed by pycbc_inspiral. It is constructed here only to visualize the results of this notebook.


In [34]:
whitened_strain_h1 = (h1_result[2][0] * (h1_result[2][0].psd**0.5)).to_timeseries()
whitened_strain_l1 = (l1_result[2][0] * (l1_result[2][0].psd**0.5)).to_timeseries()

whitened_residual_strain_h1 = (h1_residual_result[2][0] * (h1_residual_result[2][0].psd**0.5)).to_timeseries()
whitened_residual_strain_l1 = (l1_residual_result[2][0] * (l1_residual_result[2][0].psd**0.5)).to_timeseries()

We first plot the data used by the search. We offset the L1 data by the measured time-of-arrival difference, and multiply the Livingston strain by a factor of -1 to account for the relative orientations of the Hanford and Livingston detectors. The GW150914 signal is clearly visible in the data. More high-frequency noise is visible here than in Figure 1, as the only low-pass filter applied is the anti-aliasing filter used to downsample the input data. This has a higher frequency cutoff than the Butterworth filter used to make Figure 1.


In [35]:
plt.plot(whitened_strain_h1.sample_times-1126259462, whitened_strain_h1, label='H1', color='r')
plt.plot(whitened_strain_l1.sample_times-1126259462+(t_h1-t_l1), -1.0*whitened_strain_l1, label='L1', color='g')

np.savetxt('WhitenedStrainH1.txt', np.c_[whitened_strain_h1.sample_times.numpy()-1126259462, 
                                         whitened_strain_h1.numpy()])

np.savetxt('WhitenedStrainL1.txt', np.c_[whitened_strain_l1.sample_times.numpy()-1126259462, 
                                         whitened_strain_l1.numpy()])

plt.xlim([0.3,0.45])
plt.ylim([-200,200])
plt.legend(loc='best')
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Whitened strain")


Out[35]:
<matplotlib.text.Text at 0x12a4aad0>

Plot the data with the waveform subtracted

We now plot the time-series data created by pycbc_inspiral after the best-fit waveform has been subtracted. The GW150914 signal is no longer visible by eye.


In [36]:
plt.plot(whitened_residual_strain_h1.sample_times-1126259462, whitened_residual_strain_h1, label='H1', color='r')
plt.plot(whitened_residual_strain_l1.sample_times-1126259462+(t_h1-t_l1),
        -1.0*whitened_residual_strain_l1, label='L1', color='g')

np.savetxt('WhitenedResidualStrainH1.txt', np.c_[whitened_residual_strain_h1.sample_times.numpy()-1126259462, 
                                         whitened_residual_strain_h1.numpy()])

np.savetxt('WhitenedResidualStrainL1.txt', np.c_[whitened_residual_strain_l1.sample_times.numpy()-1126259462, 
                                         whitened_residual_strain_l1.numpy()])

plt.xlim([0.3,0.45])
plt.ylim([-200,200])
plt.legend(loc='best')
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Whitened strain")


Out[36]:
<matplotlib.text.Text at 0x12a66310>

Correlations in the residual data

We now look numerically for correlations in the above strain data.

Set up the correlation function

To compute the correlation of the residuals, we use the code written by Creswell et al. When run on the data from Figure 1 of the PRL, this code reproduces the green curve in the lower-right plot of Figure 7 of Creswell et al.

Here we run the code on the strain data used by the filtering code.
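
Concretely, the quantity plotted as $C(t,\tau,w)$ below is the sample Pearson correlation coefficient (the off-diagonal element returned by np.corrcoef) between the H1 residual shifted by a lag $\tau$ and the L1 residual, evaluated over a window of width $w$ starting at time $t$:

$$C(t,\tau,w) = \frac{\mathrm{cov}\left[h_{\rm H1}(t'+\tau),\, h_{\rm L1}(t')\right]}{\sigma_{\rm H1}\,\sigma_{\rm L1}}, \qquad t' \in [t,\, t+w].$$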


In [37]:
def cross_correllation(h1_res_times, h1_res_strain, l1_res_times, l1_res_strain):
    
    # Sample rate of the input time series
    fs = 1./(h1_res_times[1]-h1_res_times[0])

    # Start time (relative to 1126259462) and width of the correlation
    # window, in seconds. Based on a suggestion from Hao, the start time
    # was changed from 0.39 to 0.40.
    t = 0.40
    w = 0.04

    # Indices of the samples closest to the window boundaries
    min_indxt = np.where(abs(h1_res_times-t) == abs(h1_res_times-t).min())[0][0]
    max_indxt = np.where(abs(h1_res_times-(t+w)) == abs(h1_res_times-(t+w)).min())[0][0]

    # Maximum lag, in seconds, to scan on either side of zero
    deltatau = 0.01

    tauind_min = int(-deltatau*fs)
    tauind_max = int(+deltatau*fs)
    tauind = np.arange(tauind_min, tauind_max)
    tau = tauind/fs

    # Fixed L1 segment; the H1 segment slides past it by the lag index i
    l1_seg = l1_res_strain[min_indxt+abs(tauind_min):max_indxt-abs(tauind_max)]

    corr = []
    for i in tauind:
        # Pearson correlation coefficient between the lag-shifted H1
        # segment and the fixed L1 segment
        h1_seg = h1_res_strain[min_indxt+abs(tauind_min)+i:max_indxt-abs(tauind_max)+i]
        corr.append(np.corrcoef(h1_seg, l1_seg)[0][1])

    corr = np.array(corr)
    imax = np.argmax(abs(corr))

    plt.figure()
    plt.plot(tau * 1000.,corr,'k',label="t="+str(t)+", w="+str(w))
    plt.axvline(1000*tau[imax], color='k')
    plt.xlim(-10,10)
    plt.xlabel(r"$\tau$ [ms]")
    plt.ylabel(r"$C(t,\tau,w)$")
    plt.ylim(-1,1)
    plt.legend(loc='best')
    
    print('max |corr| = {:.2g} at tau = {:.3g} ms'.format(max(abs(corr)), 1000*tau[imax]))

Correlation of the input strain

We first compute the cross-correlation of the input strain data. A large peak can be seen at a time offset of 7.08 ms, corresponding to the GW150914 signal.
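
As a quick sanity check (an aside that is not part of the original analysis, and which assumes the pycbc.detector.Detector API), any astrophysical signal must arrive at the two sites within the inter-detector light travel time of roughly 10 ms, so an offset of 7.08 ms is physically allowed:

# Illustrative check: the Hanford-Livingston light travel time bounds
# the physically allowed arrival-time offset of a real signal.
from pycbc.detector import Detector

h1_det = Detector('H1')
l1_det = Detector('L1')

# Maximum possible time-of-arrival difference, in seconds (about 0.010 s)
print(h1_det.light_travel_time_to_detector(l1_det))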


In [38]:
cross_correllation(whitened_strain_h1.sample_times.numpy()-1126259462, whitened_strain_h1.numpy(),
                     whitened_strain_l1.sample_times.numpy()-1126259462, whitened_strain_l1.numpy())


max |corr| = 0.6 at tau = 7.08 ms

Correlation of the waveform-subtracted strain

Now we compute the cross-correlation of the strain data that has had the best-fit waveform subtracted. The peak at 7.08 ms is no longer present, and the maximum absolute value of the correlation drops to 0.27.


In [39]:
cross_correllation(whitened_residual_strain_h1.sample_times.numpy()-1126259462, whitened_residual_strain_h1.numpy(),
                     whitened_residual_strain_l1.sample_times.numpy()-1126259462, whitened_residual_strain_l1.numpy())


max |corr| = 0.27 at tau = 5.37 ms

Filter whitened residuals

The correlation plots above contain frequencies between 25 Hz and 2048 Hz, so we apply time-domain filters to the residuals to band-pass the data and notch out specific frequencies. The filters are applied with scipy's forward-backward filtfilt, so they are zero-phase and do not shift the timing of features in the data.

Define functions to perform the filtering


In [40]:
from scipy.signal import butter, freqz, filtfilt

def butter_bandpass(lowcut, highcut, fs, order=4):
    """Design a Butterworth band-pass filter with corner frequencies
    lowcut and highcut (in Hz) for data sampled at fs (in Hz)."""
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    b, a = butter(order, [low, high], btype='band')
    return b, a


def filter_data(data, coefs):
    """Apply each (b, a) filter in coefs to data in turn, using the
    zero-phase (forward-backward) filter filtfilt."""
    for b, a in coefs:
        data = filtfilt(b, a, data)
    return data

Create a band-pass filter

First we test using a Butterworth filter to band-pass the whitened data between 35 Hz and 350 Hz.


In [41]:
fs = 1.0/whitened_residual_strain_h1.delta_t

filter_coefs = []
bb, ab = butter_bandpass(35.0, 350.0, fs, order=4)
filter_coefs.append((bb, ab))

In [42]:
filterNumber = 0
for b,a in filter_coefs:
    w,h = freqz(b,a,worN=100000)
    if (filterNumber < 1): # first is bandpass...make thicker
        plt.plot((fs * 0.5/np.pi) * w, abs(h),linewidth=3)
    else:
        plt.plot((fs * 0.5/np.pi) * w, abs(h))
    filterNumber += 1
    
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain')
plt.loglog()
plt.xlim(8,2000)
plt.ylim(1e-3,2)
plt.grid(True)
plt.legend_ = None


Apply the band pass to the whitened strain data


In [43]:
bandpassed_whitened_strain_h1 = filter_data(whitened_strain_h1.numpy(), filter_coefs)
bandpassed_whitened_strain_l1 = filter_data(whitened_strain_l1.numpy(), filter_coefs)
bandpassed_whitened_residual_strain_h1 = filter_data(whitened_residual_strain_h1.numpy(), filter_coefs)
bandpassed_whitened_residual_strain_l1 = filter_data(whitened_residual_strain_l1.numpy(), filter_coefs)

Plot the band-passed whitened strain and compute the correlation

The plot below shows the band-passed whitened strain for H1 and L1.


In [44]:
plt.plot(whitened_strain_h1.sample_times-1126259462,bandpassed_whitened_strain_h1,
         label='H1',color='r')
plt.plot(whitened_strain_l1.sample_times-1126259462+(t_h1-t_l1),-1.0*bandpassed_whitened_strain_l1,
         label='L1',color='g')

plt.xlim([0.3,0.45])
plt.ylim([-200,200])
plt.legend(loc='best')
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Bandpassed Whitened strain")


Out[44]:
<matplotlib.text.Text at 0x1eb00290>

We now compute the correlation for the band-passed whitened strain.


In [45]:
cross_correllation(whitened_strain_h1.sample_times.numpy()-1126259462,bandpassed_whitened_strain_h1,
                     whitened_strain_l1.sample_times.numpy()-1126259462,bandpassed_whitened_strain_l1)


max |corr| = 0.96 at tau = 7.32 ms

Plot the band-passed, waveform-subtracted strain

The plot below shows the band-passed whitened strain with the best-fit waveform subtracted for H1 and L1.


In [46]:
plt.plot(whitened_strain_h1.sample_times-1126259462,bandpassed_whitened_residual_strain_h1,
         label='H1',color='r')
plt.plot(whitened_strain_l1.sample_times-1126259462+(t_h1-t_l1),-1.0*bandpassed_whitened_residual_strain_l1,
         label='L1',color='g')

plt.xlim([0.3,0.45])
plt.ylim([-200,200])
plt.legend(loc='best')
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Bandpassed Whitened strain")


Out[46]:
<matplotlib.text.Text at 0x20ef6a10>

We now compute the correlation for the band-passed whitened, waveform-subtracted strain.


In [47]:
cross_correllation(whitened_strain_h1.sample_times.numpy()-1126259462,bandpassed_whitened_residual_strain_h1,
                     whitened_strain_l1.sample_times.numpy()-1126259462,bandpassed_whitened_residual_strain_l1)


max |corr| = 0.72 at tau = 7.08 ms

Apply a band pass with notches

Now we apply a bandpass filter with notches for comparison to Figure 1 of the PRL.


In [48]:
from scipy.signal import iirdesign, zpk2tf, freqz

def iir_bandstops(fstops, fs, order=4):
    """ellip notch filter

    fstops is a list of entries of the form [frequency (Hz), df, df2]
    where df is the pass width and df2 is the stop width (narrower
    than the pass width). Use caution if passing more than one freq at a time,
    because the filter response might behave in ways you don't expect.

    """
    nyq = 0.5 * fs

    # Zeros zd, poles pd, and gain kd for the digital filter
    zd = np.array([])
    pd = np.array([])
    kd = 1

    # Notches
    for fstopData in fstops:
        fstop = fstopData[0]
        df = fstopData[1]
        df2 = fstopData[2]
        low = (fstop - df) / nyq
        high = (fstop + df) / nyq
        low2 = (fstop - df2) / nyq
        high2 = (fstop + df2) / nyq
        z, p, k = iirdesign([low,high], [low2,high2], gpass=1, gstop=6,
                            ftype='ellip', output='zpk')
        zd = np.append(zd,z)
        pd = np.append(pd,p)    

    # Set gain to one at 100 Hz...better not notch there.
    # (This evaluation is informational only: the filter returned below
    # is built with the unit gain kd set above.)
    bPrelim,aPrelim = zpk2tf(zd, pd, 1)
    outFreq, outg0 = freqz(bPrelim, aPrelim, 100/nyq)

    # Return the numerator and denominator of the digital filter
    b,a = zpk2tf(zd,pd,k)
    return b, a

##################################################

# Notches from the analog filter minima
notchesAbsolute = np.array(
    [1.400069436086257468e+01,
     3.469952395163710435e+01, 3.529747106409942603e+01,
     3.589993517579549831e+01, 3.670149543090175825e+01,
     3.729785276981694153e+01, 4.095285995922058930e+01,
     6.000478656292731472e+01, 1.200021163564880027e+02,
     1.799868042163590189e+02, 3.049881434406700009e+02,
     3.314922032415910849e+02, 5.100176305752708572e+02,
     1.009992518164026706e+03])


def get_filt_coefs(fs, lowcut, highcut, no_notches=False):
    """return filter coefficients for timeseries figure

    return is a list of (b,a) coefficient tuples, starting with the
    bandpass, followed by the notches.

    """
    coefs = []

    # bandpass
    bb, ab = butter_bandpass(lowcut, highcut, fs, order=4)
    coefs.append((bb, ab))
    if no_notches:
        return coefs

    for notchf in notchesAbsolute:  # notch each of the instrumental lines listed above
        bn, an = iir_bandstops(np.array([[notchf,1,0.1]]), fs, order=4)
        coefs.append((bn,an))

    # Manually do a wider filter around 510 Hz etc.
    bn, an = iir_bandstops(np.array([[510,200,20]]), fs, order=4)
    coefs.append((bn, an))

    bn, an = iir_bandstops(np.array([[3.314922032415910849e+02,10,1]]), fs, order=4)
    coefs.append((bn, an))

    return coefs

In [49]:
filter_coefs_notch = get_filt_coefs(fs, 35.0, 350.0)

Plot the band-pass filter with notches


In [50]:
filterNumber = 0
for b,a in filter_coefs_notch:
    w,h = freqz(b,a,worN=100000)
    if (filterNumber < 1): # first is bandpass...make thicker
        plt.plot((fs * 0.5/np.pi) * w, abs(h),linewidth=3)
    else:
        plt.plot((fs * 0.5/np.pi) * w, abs(h))
    filterNumber += 1
    
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain')
plt.loglog()
plt.xlim(8,2000)
plt.ylim(1e-3,2)
plt.grid(True)
plt.legend_ = None


Apply the band pass and notches to the whitened strain data


In [51]:
bandpassed_notched_whitened_strain_h1 = filter_data(whitened_strain_h1.numpy(), filter_coefs_notch)
bandpassed_notched_whitened_strain_l1 = filter_data(whitened_strain_l1.numpy(), filter_coefs_notch)
bandpassed_notched_whitened_residual_strain_h1 = filter_data(whitened_residual_strain_h1.numpy(), filter_coefs_notch)
bandpassed_notched_whitened_residual_strain_l1 = filter_data(whitened_residual_strain_l1.numpy(), filter_coefs_notch)

Plot the band-passed, notched whitened strain and compute the correlation

The plot below shows the band-passed and notched whitened strain for H1 and L1.


In [52]:
plt.plot(whitened_strain_h1.sample_times-1126259462,bandpassed_notched_whitened_strain_h1,
         label='H1',color='r')
plt.plot(whitened_strain_l1.sample_times-1126259462+(t_h1-t_l1),-1.0*bandpassed_notched_whitened_strain_l1,
         label='L1',color='g')

plt.xlim([0.3,0.45])
plt.ylim([-200,200])
plt.legend(loc='best')
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Bandpassed Whitened strain")


Out[52]:
<matplotlib.text.Text at 0x1eae8c50>

We now compute the correlation for the band-passed and notched whitened strain.


In [53]:
cross_correllation(whitened_strain_h1.sample_times.numpy()-1126259462,bandpassed_notched_whitened_strain_h1,
                     whitened_strain_l1.sample_times.numpy()-1126259462,bandpassed_notched_whitened_strain_l1)


max |corr| = 0.96 at tau = 7.32 ms

Plot the band-passed, notched, waveform-subtracted strain

The plot below shows the band-passed and notched whitened strain with the best-fit waveform subtracted for H1 and L1.


In [54]:
plt.plot(whitened_strain_h1.sample_times-1126259462,bandpassed_notched_whitened_residual_strain_h1,
         label='H1',color='r')
plt.plot(whitened_strain_l1.sample_times-1126259462+(t_h1-t_l1),-1.0*bandpassed_notched_whitened_residual_strain_l1,
         label='L1',color='g')

plt.xlim([0.3,0.45])
plt.ylim([-200,200])
plt.legend(loc='best')
plt.xlabel("Time from 1126259462 (seconds)")
plt.ylabel("Bandpassed Whitened strain")


Out[54]:
<matplotlib.text.Text at 0x37a84b90>

We now compute the correlation for the band-passed, notched, waveform-subtracted strain.


In [55]:
cross_correllation(whitened_strain_h1.sample_times.numpy()-1126259462,bandpassed_notched_whitened_residual_strain_h1,
                     whitened_strain_l1.sample_times.numpy()-1126259462,bandpassed_notched_whitened_residual_strain_l1)


max |corr| = 0.7 at tau = 7.08 ms

Save the residual strain

Finally, we save the residual strain for reference. We multiply the frequency-domain residual by the power spectral density to undo the overwhitening, transform back to the time domain, and save the result.
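
In the notation used earlier, this step simply inverts the overwhitening:

$$\tilde{d}(f) = S_n(f)\,\tilde{d}_{\rm ow}(f).$$

The division by DYN_RANGE_FAC in the cell below removes the dynamic-range scaling, returning the data to physical strain units.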


In [56]:
residual_strain_h1 = (h1_residual_result[2][0] * (h1_residual_result[2][0].psd)).to_timeseries()
residual_strain_l1 = (l1_residual_result[2][0] * (l1_residual_result[2][0].psd)).to_timeseries()

np.savetxt('ResidualStrainH1.txt', np.c_[residual_strain_h1.sample_times.numpy()-1126259462, 
                                         residual_strain_h1.numpy()/DYN_RANGE_FAC])

np.savetxt('ResidualStrainL1.txt', np.c_[residual_strain_l1.sample_times.numpy()-1126259462, 
                                         residual_strain_l1.numpy()/DYN_RANGE_FAC])

Acknowledgements

DAB thanks Jeandrew Brink, Jolien Creighton, Will Farr, Ben Farr, Jon Gair, Ian Harry, Daniel Holz, Andrew Jackson, Joey Key, Hao Liu, Andrew Lundgren, Jessica McIver, Pavel Naselsky, Alex Neilsen, and Alexander Nitz for helpful discussions. This work is supported by National Science Foundation award PHY-1707954. DAB thanks the Niels Bohr Institute for its hospitality while part of this work was completed, and the Kavli Foundation and the DNRF for supporting the 2017 Kavli Summer Program.

Appendix: Version information


In [57]:
import sys

# Running pycbc_inspiral's main function with --version prints version
# information and raises SystemExit, which we catch here.
sys.argv = ['pycbc_inspiral', '--version']
try:
    findchirp()
except SystemExit:
    pass


--- PyCBC Version --------------------------
Branch: None
Tag: v1.7.5
Id: c52b16863162a73023f6d3f094e89555938e16d8
Builder: Unknown User <>
Build date: 2017-07-13 17:56:16 +0000
Repository status is CLEAN: All modifications committed

Imported from: /home/pycbc/pycbc-software/lib/python2.7/site-packages/PyCBC-1.7.5-py2.7.egg/pycbc/__init__.pyc

--- LAL Version ----------------------------
Branch: None
Tag: None
Id: 539c8700af92eb6dd00e0e91b9dbaf5bae51f004

Builder: Unknown User <>
Repository status: CLEAN: All modifications committed

Imported from: /home/pycbc/pycbc-software/opt/lalsuite/lib64/python2.7/site-packages/lal/__init__.pyc

Runtime libraries:
	linux-vdso.so.1 =>  (0x00007ffc23ff0000)
	liblalsupport.so.9 => /home/pycbc/pycbc-software/opt/lalsuite/lib/liblalsupport.so.9 (0x00007f572ceb2000)
	liblal.so.13 => /home/pycbc/pycbc-software/opt/lalsuite/lib/liblal.so.13 (0x00007f572cb62000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007f572c945000)
	libz.so.1 => /lib64/libz.so.1 (0x00007f572c72f000)
	libhdf5.so.8 => /lib64/libhdf5.so.8 (0x00007f572c13a000)
	libhdf5_hl.so.8 => /lib64/libhdf5_hl.so.8 (0x00007f572bf06000)
	libfftw3.so.3 => /lib64/libfftw3.so.3 (0x00007f572ba17000)
	libfftw3f.so.3 => /lib64/libfftw3f.so.3 (0x00007f572b4e6000)
	libgsl.so.0 => /lib64/libgsl.so.0 (0x00007f572b0bd000)
	libgslcblas.so.0 => /lib64/libgslcblas.so.0 (0x00007f572ae80000)
	libm.so.6 => /lib64/libm.so.6 (0x00007f572ab7d000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f572a961000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f572a5a0000)
	/lib64/ld-linux-x86-64.so.2 (0x00005562dd29b000)
	libsatlas.so.3 => /usr/lib64/atlas/libsatlas.so.3 (0x00007f5729844000)
	libgfortran.so.3 => /lib64/libgfortran.so.3 (0x00007f5729521000)
	libquadmath.so.0 => /lib64/libquadmath.so.0 (0x00007f57292e4000)
	libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f57290ce000)


--- LALSimulation Version-------------------
Branch: None
Tag: None
Id: 539c8700af92eb6dd00e0e91b9dbaf5bae51f004

Builder: Unknown User <>
Repository status: CLEAN: All modifications committed

Imported from: /home/pycbc/pycbc-software/opt/lalsuite/lib64/python2.7/site-packages/lalsimulation/__init__.pyc

Runtime libraries:
	linux-vdso.so.1 =>  (0x00007ffd155c9000)
	liblalsimulation.so.14 => /home/pycbc/pycbc-software/opt/lalsuite/lib/liblalsimulation.so.14 (0x00007f6f898e4000)
	liblalsupport.so.9 => /home/pycbc/pycbc-software/opt/lalsuite/lib/liblalsupport.so.9 (0x00007f6f8954c000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007f6f8932f000)
	libz.so.1 => /lib64/libz.so.1 (0x00007f6f89119000)
	libhdf5.so.8 => /lib64/libhdf5.so.8 (0x00007f6f88b24000)
	libhdf5_hl.so.8 => /lib64/libhdf5_hl.so.8 (0x00007f6f888f0000)
	liblal.so.13 => /home/pycbc/pycbc-software/opt/lalsuite/lib/liblal.so.13 (0x00007f6f885a1000)
	libfftw3.so.3 => /lib64/libfftw3.so.3 (0x00007f6f880b1000)
	libfftw3f.so.3 => /lib64/libfftw3f.so.3 (0x00007f6f87b81000)
	libgsl.so.0 => /lib64/libgsl.so.0 (0x00007f6f87758000)
	libgslcblas.so.0 => /lib64/libgslcblas.so.0 (0x00007f6f8751a000)
	libm.so.6 => /lib64/libm.so.6 (0x00007f6f87218000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f6f86ffc000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f6f86c3a000)
	/lib64/ld-linux-x86-64.so.2 (0x00005588df66c000)
	libsatlas.so.3 => /usr/lib64/atlas/libsatlas.so.3 (0x00007f6f85edf000)
	libgfortran.so.3 => /lib64/libgfortran.so.3 (0x00007f6f85bbb000)
	libquadmath.so.0 => /lib64/libquadmath.so.0 (0x00007f6f8597f000)
	libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f6f85769000)