This notebook covers more advanced examples of using DataPaths. It assumes that you understand the concepts presented in the previous example notebooks.
You should also read the ERMrest documentation and the derivapy wiki. Some of the more advanced concepts demonstrated in this notebook are not fully (re)explained here, as they are covered in that documentation.
The examples require a basic understanding of the example catalog's data model, which in this case manages data for biological experiments.
'dataset': represents a unit of data, usually for a study or set of experiments
'biosample': a biosample (describes biological details of a specimen)
'replicate': a replicate (describes both bio- and technical replicates)
'experiment': a bioassay (any type of experiment or assay; e.g., imaging, RNA-seq, ChIP-seq, etc.)
The tables are related as follows:
dataset <- biosample: a dataset may have one to many biosamples, i.e., there is a foreign key reference from biosample to dataset
dataset <- experiment: a dataset may have one to many experiments, i.e., there is a foreign key reference from experiment to dataset
experiment <- replicate: an experiment may have one to many replicates, i.e., there is a foreign key reference from replicate to experiment
In [1]:
# Import deriva modules and pandas DataFrame (for use in examples only)
from deriva.core import ErmrestCatalog, get_credential
from pandas import DataFrame
In [2]:
# Connect with the deriva catalog
protocol = 'https'
hostname = 'www.facebase.org'
catalog_number = 1
credential = None
# If you need to authenticate, use Deriva Auth agent and get the credential
# credential = get_credential(hostname)
catalog = ErmrestCatalog(protocol, hostname, catalog_number, credential)
In [3]:
# Get the path builder interface for this catalog
pb = catalog.getPathBuilder()
# Get some local variable handles to tables for convenience
dataset = pb.isa.dataset
experiment = pb.isa.experiment
biosample = pb.isa.biosample
replicate = pb.isa.replicate
Proceed with caution
For compactness, Table objects (and TableAlias objects) provide DataPath-like methods, e.g., link(...), filter(...), and entities(...), which implicitly create DataPaths rooted at the table and return the newly created path. These operations return the new DataPath rather than mutating the Table (or TableAlias) objects.
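As a minimal illustration of this behavior (the variable names here are ours), each call on the table yields an independent path and leaves the table itself unchanged:
# Each call creates a new, independent DataPath rooted at `dataset`;
# the `dataset` Table object itself is not modified.
path_a = dataset.filter(dataset.released == True)
path_b = dataset.link(experiment)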
In [4]:
entities = dataset.filter(dataset.released == True).entities()
len(entities)
Out[4]:
In [5]:
path = dataset.alias('D').path
path.link(experiment).link(replicate)
results = path.attributes(path.D)
print(len(results))
print(results.uri)
It is important to remember that the attributes(...) method returns a result set based on the entity type of the last element of the path. In this example, that means the number of results is determined by the number of unique rows of the replicate table instance in the path created above, since the last link(...) call used the replicate table.
In [6]:
results = path.attributes(path.D,
path.experiment.experiment_type,
path.replicate)
print(len(results))
print(results.uri)
If you want to base the results on a different entity, you can reset the path's context by referencing a table instance alias before calling the attributes(...) method. In this case, even though we are asking for the same attributes, we get the set of datasets rather than the set of replicates. Also, since we are including attributes from dataset in our query, we know that we will not see any duplicate rows.
In [7]:
results = path.D.attributes(path.D,
path.experiment.experiment_type,
path.replicate)
print(len(results))
print(results.uri)
In [8]:
path = dataset.link(experiment).filter(experiment.molecule_type == None)
print(path.uri)
print(len(path.entities()))
In [9]:
path = dataset.filter(dataset.description.ciregexp('palate'))
print(path.uri)
print(len(path.entities()))
Use the "inverse" ('~
') operator to negate a filter. Negation works against simple comparison filters as demonstrated above as well as on logical operators to be discussed next. You must wrap the comparison or logical operators in an extra parens to use the negate operation, e.g., "~ (...)
".
In [10]:
path = dataset.filter( ~ (dataset.description.ciregexp('palate')) )
print(path.uri)
print(len(path.entities()))
This example shows how to combine two comparisons with a conjunction (i.e., an and operator). Because Python's logical-and (and) keyword cannot be overloaded, we instead overload the bitwise-and (&) operator. This approach has become customary among many similar data access libraries. Logical-or is handled the same way, via the bitwise-or (|) operator, as a later example shows.
In [11]:
path = dataset.link(biosample).filter(
((biosample.species == 'NCBITAXON:10090') & (biosample.anatomy == 'UBERON:0002490')))
print(path.uri)
In [12]:
DataFrame(path.entities())
Out[12]:
In [13]:
path = dataset.link(biosample).filter(
((biosample.species == 'NCBITAXON:10090') & (biosample.anatomy == 'UBERON:0002490')) |
((biosample.specimen == 'FACEBASE:1-4GNR') & (biosample.stage == 'FACEBASE:1-4GJA')))
print(path.uri)
In [14]:
DataFrame(path.entities())
Out[14]:
Filtering a path does not have to be done at the end of the path. In fact, the initial intention of the ERMrest URI was to mimic "RESTful" semantics, where a RESTful "resource" is identified, then filtered, then a "sub-resource" is identified, then filtered, and so on.
In [15]:
path = dataset.filter(dataset.release_date >= '2017-01-01') \
.link(experiment).filter(experiment.experiment_type == 'OBI:0001271') \
.link(replicate).filter(replicate.bioreplicate_number == 1)
print(path.uri)
In [16]:
DataFrame(path.entities())
Out[16]:
Up until now, the examples have shown how to link entities via implicit join predicates. That is, we knew there existed a foreign key reference constraint between foreign keys of one entity and keys of another entity. We needed only to ask ERMrest to link the entities in order to get the linked set.
The problem with implicit links is that they become ambiguous when there is more than one foreign key reference between the tables. To support these situations, ERMrest and the DataPath's link(...) method allow the columns to use for the link condition to be specified explicitly.
The structure of the on clause is:
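on=(LEFT_TABLE_OR_ALIAS.COLUMN == RIGHT_TABLE_OR_ALIAS.COLUMN)
The capitalized names are placeholders of our own; the next cell shows a concrete instance using the dataset and experiment tables.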
In [17]:
path = dataset.link(experiment, on=(dataset.RID==experiment.dataset))
print(path.uri)
IMPORTANT: Not all tables are related by foreign key references. ERMrest does not allow arbitrary relational joins. Tables must be related by a foreign key reference in order to link them in a data path.
In [18]:
DataFrame(path.entities().fetch(limit=3))
Out[18]:
In [19]:
path = dataset.link(biosample.alias('S'), on=(dataset.RID==biosample.dataset))
print(path.uri)
Notice that we cannot use the alias right away in the on clause, because it is not bound to the path until after the link(...) operation has been performed.
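Once the link has been made, however, the bound alias can anchor further operations on the path. A small sketch (assuming the path built in the previous cell):
# After link(...), the alias 'S' is bound to the path and can be used as a context,
# e.g., to fetch entities of the biosample table instance.
results = path.S.entities()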
Up until now, the examples have shown links with inner join semantics. Outer join semantics can be expressed as part of explicit column links, and only when using explicit column links.
The link(...) method accepts a join_type parameter, i.e., .link(..., join_type=TYPE), where TYPE may be 'left', 'right', or 'full', and defaults to '', which indicates an inner join.
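For instance, the explicit link from the earlier example could be written as a full outer join like so (a sketch only, reusing the same on clause):
# Sketch: the same explicit dataset-experiment link expressed as a full outer join.
path = dataset.link(experiment, on=(dataset.RID == experiment.dataset), join_type='full')
print(path.uri)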
By 'left' outer joining in the links from 'dataset' to 'experiment' and to 'biosample', and then resetting the context of the path to 'dataset', the following path gives us a reference to the 'dataset' entities whether or not they have any experiments or biosamples.
In [20]:
# Notice in between `link`s that we have to reset the context back to `dataset` so that the
# second join is also left joined from the dataset table instance.
path = dataset.link(experiment.alias('E'), on=dataset.RID==experiment.dataset, join_type='left') \
.dataset \
.link(biosample.alias('S'), on=dataset.RID==biosample.dataset, join_type='left')
# Notice that we have to perform the attribute fetch from the context of the `path.dataset`
# table instance.
results = path.dataset.attributes(path.dataset.RID,
path.dataset.title,
path.E.experiment_type,
path.S.species)
print(results.uri)
len(results)
Out[20]:
We can see above that we get the full set of datasets whether or not they have any experiments or biosamples. For further evidence, we can convert the results to a DataFrame and look at a slice of its entries. Note that the 'experiment_type' and 'species' attributes do not exist for some results (i.e., NaN) because the outer joins found no matching rows for those datasets.
In [21]:
DataFrame(results)[:10]
Out[21]:
You may have noticed that in the examples above, the 'species' and 'experiment_type' attributes are identifiers (CURIEs, to be precise). We may want to construct filters on our datasets based on these categories. This can support "faceted search" modes and can be useful even in the context of programmatic access to data in the catalog.
Let's say we want to find all of the biosamples in our catalog whose species is 'Mus musculus' and whose age stage is 'E10.5'.
We need to extend our understanding of the data model with the following tables that are related to 'biosample'.
isa.biosample.species -> vocab.species: the biosample table has a foreign key reference to the 'species' table
isa.biosample.stage -> vocab.stage: the biosample table has a foreign key reference to the 'stage' table
We may say that species and stage are related to the biosample table in the sense that biosample has direct foreign key references from it to them.
For convenience, we will get local variables for the species and stage tables.
In [22]:
species = pb.vocab.species
stage = pb.vocab.stage
First, let's link samples with species and filter on the term "Mus musculus" (i.e., "mouse").
In [23]:
# Here we have to use the `column_definitions` container because `name` is a reserved property
path = biosample.alias('S').link(species).filter(species.column_definitions['name'] == 'Mus musculus')
print(path.uri)
Now the context of the path is the species table instance, but we need to link from the biosample to the age stage table.
To do so, we reference the biosample table instance, in this case using its alias S. Then we link off of that table instance, which updates the path itself.
In [24]:
path.S.link(stage).filter(stage.column_definitions['name'] == 'E10.5')
print(path.uri)
Now the path context is the age stage table instance, but we wanted to get the entities for the biosample table. To do so, again we reference the biosample table instance by the alias S we used. From there, we call the attributes(...) method to get the sample attributes we want.
In [25]:
results = path.S.attributes(path.S.RID,
path.S.collection_date,
path.species.column_definitions['name'].alias('species'),
path.species.column_definitions['uri'].alias('species_uri'),
path.stage.column_definitions['name'].alias('stage'),
path.stage.column_definitions['uri'].alias('stage_uri'))
print(results.uri)
In [26]:
DataFrame(results)
Out[26]:
Now suppose you would like to aggregate all of the vocabulary terms associated with a dataset. Here, we examine what happens when you have a model such that dataset <- dataset_VOCAB -> VOCAB, where VOCAB is a placeholder for a table that contains a vocabulary term set. These tables typically have a name column for the human-readable preferred label to go along with the formal URI or CURIE of the concept class.
In [27]:
# We need to import the `ArrayD` aggregate function for this example.
from deriva.core.datapath import ArrayD
# For convenience, get python objects for the additional tables.
dataset_organism = pb.isa.dataset_organism
dataset_experiment_type = pb.isa.dataset_experiment_type
species = pb.vocab.species
experiment_type = pb.vocab.experiment_type
# Start by doing a couple left outer joins on the dataset-term association tables, then link
# (i.e., inner join) the associated vocabulary term table, then reset the context back to the
# dataset table.
path = dataset.link(dataset_organism, on=dataset.id==dataset_organism.dataset_id, join_type='left') \
.link(species) \
.dataset \
.link(dataset_experiment_type, on=dataset.id==dataset_experiment_type.dataset_id, join_type='left') \
.link(experiment_type)
# Again, notice that we reset the context to the `dataset` table alias so that we will retrieve
# dataset entities based on the groupings to be defined next. For the groupby key we will use the
# dataset.RID, but for this example any primary key would work. Then we will get aggregate arrays
# of the linked vocabulary tables.
results = path.dataset.groupby(dataset.RID).attributes(
dataset.title,
ArrayD(path.species.column_definitions['name']).alias('species'),
ArrayD(path.experiment_type.column_definitions['name']).alias('experiment_type')
)
print(results.uri)
print(len(results))
In [28]:
DataFrame(results.fetch(limit=20))
Out[28]:
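If you only need counts rather than the full arrays of terms, deriva.core.datapath also provides other aggregate functions, such as Cnt and CntD. A minimal sketch reusing the path and variables above (ours, not part of the original notebook):
# Sketch: count the distinct species terms per dataset with the CntD aggregate.
from deriva.core.datapath import CntD
results = path.dataset.groupby(dataset.RID).attributes(
    dataset.title,
    CntD(path.species.column_definitions['name']).alias('num_species')
)
print(results.uri)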