Example-Filter-Pipeline


Example Q3: Managing the Filter Pipeline

This example notebook shows how to use the PipelineManager to modify the signal processing on qubit data.

© Raytheon BBN Technologies 2018

We initialize a slightly more advanced channel library:


In [1]:
from QGL import *

cl = ChannelLibrary(db_resource_name=":memory:")

# Create five qubits and supporting hardware
for i in range(5):
    cl.new_qubit(f"q{i}")
    cl.new_APS2(f"BBNAPS2-{2*i+1}", address=f"192.168.5.{101+2*i}") 
    cl.new_APS2(f"BBNAPS2-{2*i+2}", address=f"192.168.5.{102+2*i}")
    cl.new_X6(f"X6_{i}", address=0)
    cl.new_source(f"Holz{2*i+1}", "HolzworthHS9000", f"HS9004A-009-{2*i}", power=-30)
    cl.new_source(f"Holz{2*i+2}", "HolzworthHS9000", f"HS9004A-009-{2*i+1}", power=-30) 
    cl.set_control(cl[f"q{i}"], cl[f"BBNAPS2-{2*i+1}"], generator=cl[f"Holz{2*i+1}"])
    cl.set_measure(cl[f"q{i}"], cl[f"BBNAPS2-{2*i+2}"], cl[f"X6_{i}"][1], generator=cl[f"Holz{2*i+2}"])

cl.set_master(cl["BBNAPS2-1"], cl["BBNAPS2-1"].ch("m2"))
cl.commit()


AWG_DIR environment variable not defined. Unless otherwise specified, using temporary directory for AWG sequence file outputs.

Creating the Default Filter Pipeline


In [2]:
from auspex.qubit import *


auspex-WARNING: 2019-04-04 13:38:24,127 ----> You may not have the libusb backend: please install it!
auspex-WARNING: 2019-04-04 13:38:24,298 ----> Could not load channelizer library; falling back to python methods.

The PipelineManager is analogous to the ChannelLibrary in that it provides the user with an interface to programmatically modify the filter pipeline, and to save and load different versions of the pipeline.


In [3]:
pl = PipelineManager()


auspex-INFO: 2019-04-04 13:38:24,641 ----> Could not find an existing pipeline. Please create one.

Pipelines are fairly predictable, and will provide some subset of the following functionality: demodulating, integrating, averaging, and writing to file. Some of these steps can be performed in hardware, some in software. The PipelineManager can guess what the user wants for a particular qubit by inspecting which equipment has been assigned to it with the ChannelLibrary's set_measure command. For example, this ChannelLibrary has defined X6-1000M cards for readout, and the description of this instrument indicates that the highest-level stream it makes available is integrated. Thus, the PipelineManager automatically inserts the remaining averager and writer.


In [ ]:
pl.create_default_pipeline()
pl.show_pipeline()
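The stream-type inference described above can be sketched as a toy. This is NOT Auspex's actual code; the stage names and helper below are illustrative assumptions, grounded only in the fact that an X6's highest hardware stream is integrated, leaving the averager and writer to software.

```python
# Toy sketch, NOT Auspex's implementation: given the highest stream type a
# digitizer produces in hardware, choose the remaining software filters
# needed to complete a default pipeline.
STAGES = ["raw", "demodulated", "integrated", "averaged", "written"]
SOFTWARE_FILTERS = {"demodulated": "Demodulate", "integrated": "Integrate",
                    "averaged": "Average", "written": "Write"}

def remaining_filters(highest_hw_stream):
    """Return the software filters downstream of the hardware stream."""
    idx = STAGES.index(highest_hw_stream)
    return [SOFTWARE_FILTERS[s] for s in STAGES[idx + 1:]]

# The X6's highest available stream is "integrated", so only the averager
# and writer need to be added in software.
print(remaining_filters("integrated"))  # ['Average', 'Write']
```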

Sometimes, for debugging purposes, one may wish to add multiple pipelines per qubit. Additional pipelines can be added explicitly by running:


In [ ]:
pl.add_qubit_pipeline("q1", "demodulated")
pl.show_pipeline()


In [6]:
pl.ls()


id  Year  Date     Time         Name
0   2019  Apr. 04  01:38:24 PM  working

We can print the properties of a single node:


In [9]:
pl["q1 integrated"].print()


streamselect (q1)  Unlabeled

Attribute         Value                       Changes?
hash_val          1148803462
stream_type       integrated
dsp_channel       1
if_freq           0.0
kernel_data       Binary Data of length 1024
kernel_bias       0.0
threshold         0.0
threshold_invert  False

We can print the properties of individual filters or subgraphs:


In [10]:
pl.print("q1 integrated")


Name               Attribute         Value                       Uncommitted Changes
write (q1)         Unlabeled
                   hash_val          4168520539
                   filename          output.auspex
                   groupname         q1-main
                   add_date          False
average (q1)       Unlabeled
                   hash_val          843952093
                   axis              averages
streamselect (q1)  Unlabeled
                   hash_val          1148803462
                   stream_type       integrated
                   dsp_channel       1
                   if_freq           0.0
                   kernel_data       Binary Data of length 1024
                   kernel_bias       0.0
                   threshold         0.0
                   threshold_invert  False

Dictionary access is provided to allow drilling down into the pipelines. One can use either the specific label of a filter or simply its type in this access mode:
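As a toy illustration of this access mode (NOT Auspex's implementation; the `ToyFilter` class and its attributes are invented for this sketch), a lookup that matches a child by its explicit label first and falls back to its type might look like:

```python
# Toy sketch, NOT Auspex's implementation: dictionary-style access that
# resolves a child filter by its explicit label or by its type name.
class ToyFilter:
    def __init__(self, type_name, label=None, **attrs):
        self.type_name = type_name
        self.label = label
        self.children = []
        for k, v in attrs.items():
            setattr(self, k, v)

    def add(self, child):
        # Return the child so that .add() calls can be chained.
        self.children.append(child)
        return child

    def __getitem__(self, key):
        for child in self.children:
            if key in (child.label, child.type_name):
                return child
        raise KeyError(key)

root = ToyFilter("streamselect")
avg = root.add(ToyFilter("Average"))
writer = avg.add(ToyFilter("Write", label="q1-writer", filename="output.auspex"))

# Both the type name and the label reach the same node.
root["Average"]["Write"].filename = "new.h5"
assert root["Average"]["q1-writer"].filename == "new.h5"
```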


In [11]:
pl["q1 integrated"]["Average"]["Write"].filename = "new.h5"
pl.print("q1 integrated")


Name               Attribute         Value                       Uncommitted Changes
write (q1)         Unlabeled
                   hash_val          4168520539
                   filename          new.h5                      Yes
                   groupname         q1-main
                   add_date          False
average (q1)       Unlabeled
                   hash_val          843952093
                   axis              averages
streamselect (q1)  Unlabeled
                   hash_val          1148803462
                   stream_type       integrated
                   dsp_channel       1
                   if_freq           0.0
                   kernel_data       Binary Data of length 1024
                   kernel_bias       0.0
                   threshold         0.0
                   threshold_invert  False

Note that the uncommitted change is flagged. It can be committed in the standard way:


In [12]:
cl.commit()
pl.print("q1 integrated")


Name               Attribute         Value                       Uncommitted Changes
write (q1)         Unlabeled
                   hash_val          4168520539
                   filename          new.h5
                   groupname         q1-main
                   add_date          False
average (q1)       Unlabeled
                   hash_val          843952093
                   axis              averages
streamselect (q1)  Unlabeled
                   hash_val          1148803462
                   stream_type       integrated
                   dsp_channel       1
                   if_freq           0.0
                   kernel_data       Binary Data of length 1024
                   kernel_bias       0.0
                   threshold         0.0
                   threshold_invert  False
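The commit behavior can be sketched as a toy as well. This is NOT Auspex's session machinery (which is backed by a database); the `TrackedNode` class below is an invented illustration of the idea that an attribute counts as uncommitted until `commit()` snapshots the current state.

```python
# Toy sketch, NOT Auspex's session machinery: flag attribute edits as
# uncommitted until commit() snapshots the node's current state.
class TrackedNode:
    def __init__(self, **attrs):
        self._attrs = dict(attrs)
        self._committed = dict(attrs)

    def __setattr__(self, name, value):
        # Private bookkeeping attributes bypass change tracking.
        if name.startswith("_"):
            super().__setattr__(name, value)
        else:
            self._attrs[name] = value

    def changed(self):
        """Return the set of attributes that differ from the last commit."""
        return {k for k, v in self._attrs.items() if self._committed.get(k) != v}

    def commit(self):
        self._committed = dict(self._attrs)

w = TrackedNode(filename="output.auspex", groupname="q1-main")
w.filename = "new.h5"
assert w.changed() == {"filename"}   # flagged as an uncommitted change
w.commit()
assert w.changed() == set()          # committing clears the flag
```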

Programmatic Modification of the Pipeline

Some simple convenience functions allow the user to easily specify complex pipeline structures.


In [13]:
pl.commit()
pl.save_as("simple")
pl["q1 demodulated"].clear_pipeline()
pl["q1 demodulated"].stream_type = "raw"
pl.recreate_pipeline()

In [ ]:
pl.show_pipeline()

Note the name change: the pipeline is referred to by the stream type of its first element.


In [ ]:
pl["q1 raw"].show_pipeline()


In [ ]:
pl["q1 raw"].add(Display(label="Raw Plot"))
pl["q1 raw"]["Demodulate"].add(Average(label="Demod Average")).add(Display(label="Demod Plot"))
pl.show_pipeline()

As with the ChannelLibrary, we can save, list, and load versions of the filter pipeline.


In [17]:
pl.session.commit()
pl.save_as("custom")
pl.ls()


id  Year  Date     Time         Name
0   2019  Apr. 04  01:38:24 PM  simple
1   2019  Apr. 04  01:38:24 PM  working
2   2019  Apr. 04  01:38:25 PM  custom

In [ ]:
pl.load("simple")
pl.show_pipeline()


In [19]:
pl.ls()


id  Year  Date     Time         Name
0   2019  Apr. 04  01:38:24 PM  simple
1   2019  Apr. 04  01:38:24 PM  working
2   2019  Apr. 04  01:38:25 PM  custom

Pipeline Examples

Below are some examples of how more complicated pipelines can be constructed. Defining these as functions allows for quickly changing the structure of the data pipeline depending on the experiment being done. It also improves reproducibility and documents the pipeline parameters. For example, to change the pipeline and check its construction:

pl = create_tomo_pipeline(save_rr=True)
pl.show_pipeline()

Hopefully the examples below will show you some of the more advanced things that can be done with the data pipelines in Auspex.


In [ ]:
# a basic pipeline that uses 'raw' data at the beginning of the data processing
def create_standard_pipeline():
    pl = PipelineManager()
    pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']))
    for ql in ['q2', 'q3']:
        qb = cl[ql]
        pl[ql].clear_pipeline()
        pl[ql].stream_type = "raw"
        pl[ql].create_default_pipeline(buffers=False)
        pl[ql].if_freq = qb.measure_chan.autodyne_freq
        pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq
        pl[ql]["Demodulate"]["Integrate"].simple_kernel = True
        pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7
        pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6
        #pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int"))
        pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0))
        pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average")
    return pl

# if you only want to save data integrated with the single-shot filter 
def create_integrated_pipeline(save_rr=False, plotting=True):
    pl = PipelineManager()
    pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']))
    for ql in ['q2', 'q3']:
        qb = cl[ql]
        pl[ql].clear_pipeline()
        pl[ql].stream_type = "integrated"
        pl[ql].create_default_pipeline(buffers=False)
        pl[ql].kernel = f"{ql.upper()}_SSF_kernel.txt"
        if save_rr:
            pl[ql].add(Write(label="RR-Writer", groupname=ql+"-rr"))
        if plotting:
            pl[ql]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0))
            pl[ql]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average")

    return pl

# create two single-shot fidelity pipelines, one per qubit
def create_fidelity_pipeline():
    pl = PipelineManager()
    pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']))
    for ql in ['q2', 'q3']:
        qb = cl[ql]
        pl[ql].clear_pipeline()
        pl[ql].stream_type = "raw"
        pl[ql].create_default_pipeline(buffers=False)
        pl[ql].if_freq = qb.measure_chan.autodyne_freq
        pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq
        pl[ql].add(FidelityKernel(save_kernel=True, logistic_regression=False, set_threshold=True, label=f"Q{ql[-1]}_SSF"))
        pl[ql]["Demodulate"]["Integrate"].simple_kernel = True
        pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7
        pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6
        #pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int"))
        pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0))
        pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average")
    return pl

# optionally save the demodulated data
def create_RR_pipeline(plot=False, write_demods=False):
    pl = PipelineManager()
    pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']))
    for ql in ['q2', 'q3']:
        qb = cl[ql]
        pl[ql].clear_pipeline()
        pl[ql].stream_type = "raw"
        pl[ql].create_default_pipeline(buffers=False)
        pl[ql].if_freq = qb.measure_chan.autodyne_freq
        pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq
        if write_demods:
            pl[ql]["Demodulate"].add(Write(label="demod-writer", groupname=ql+"-demod"))
        pl[ql]["Demodulate"]["Integrate"].simple_kernel = True
        pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7
        pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6
        pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int"))
        if plot:
            pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0))
            pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average")
    return pl

# save everything... using data buffers instead of writing to file
def create_full_pipeline(buffers=True):
    pl = PipelineManager()
    pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']), buffers=buffers)
    for ql in ['q2', 'q3']:
        qb = cl[ql]
        pl[ql].clear_pipeline()
        pl[ql].stream_type = "raw"
        pl[ql].create_default_pipeline(buffers=buffers)
        if buffers:
            pl[ql].add(Buffer(label="raw_buffer"))
        else:
            pl[ql].add(Write(label="raw-write", groupname=ql+"-raw"))
        pl[ql].if_freq = qb.measure_chan.autodyne_freq
        pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq
        if buffers:
            pl[ql]["Demodulate"].add(Buffer(label="demod_buffer"))
        else:
            pl[ql]["Demodulate"].add(Write(label="demod_write", groupname=ql+"-demod"))
        pl[ql]["Demodulate"]["Integrate"].simple_kernel = True
        pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7
        pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.6e-6
        if buffers:
            pl[ql]["Demodulate"]["Integrate"].add(Buffer(label="integrator_buffer"))
        else:
            pl[ql]["Demodulate"]["Integrate"].add(Write(label="int_write", groupname=ql+"-integrated"))
        pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0))
        pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average")
    return pl

# A more complicated pipeline with a correlator
# These have to be coded more manually because the correlator needs all the correlated channels specified.
# Note that for tomography you're going to want to save the data variance as well, though this can be calculated 
# after the fact if you save the raw shots (save_rr).
def create_tomo_pipeline(save_rr=False, plotting=True):
    pl = PipelineManager()
    pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']))
    
    for ql in ['q2', 'q3']:
        qb = cl[ql]
        pl[ql].clear_pipeline()
        pl[ql].stream_type = "integrated"
        pl[ql].create_default_pipeline(buffers=False) 
        pl[ql].kernel = f"{ql.upper()}_SSF_kernel.txt"
        pl[ql]["Average"].add(Write(label='var'), connector_out='final_variance')
        pl[ql]["Average"]["var"].groupname = ql + '-main'
        pl[ql]["Average"]["var"].datasetname = 'variance'
        if save_rr:
            pl[ql].add(Write(label="RR-Writer", groupname=ql+"-rr"))
        if plotting:
            pl[ql]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0))
            pl[ql]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average")
        
    # needed for two-qubit state reconstruction
    pl.add_correlator(pl['q2'], pl['q3'])
    pl['q2']['Correlate'].add(Average(label='corr'))
    pl['q2']['Correlate']['Average'].add(Write(label='corr_write'))
    pl['q2']['Correlate']['Average'].add(Write(label='corr_var'), connector_out='final_variance')
    pl['q2']['Correlate']['Average']['corr_write'].groupname = 'correlate'
    pl['q2']['Correlate']['Average']['corr_var'].groupname = 'correlate'
    pl['q2']['Correlate']['Average']['corr_var'].datasetname = 'variance'
        
    return pl