In this notebook, we'll take a look at the Bundesbank gold price dataset (bundesbank_bbk01_wt5511), available on Quantopian. This dataset spans from 1968 through the current day and contains the daily price of gold, as sourced from the Deutsche Bundesbank Data Repository. We access this data via the API provided by Quandl. See Quandl's detail page for this dataset for more information.
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
With the preamble in place, let's get started:
Partner datasets are available in Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets in an interactive, generic manner.
Blaze serves an important function here: some of these datasets run to many millions of records, and bringing that data directly into Quantopian Research is simply not viable. Blaze gives us a simple querying interface and shifts the heavy lifting to the server side.
A common pattern is to use Blaze to reduce the dataset's size, convert it to a pandas DataFrame, and then use pandas for further computation, manipulation, and visualization.
Helpful links: the Blaze documentation (http://blaze.pydata.org) covers query building and how common pandas and SQL idioms translate into Blaze expressions.
Once you've limited the size of your Blaze object, you can convert it to a pandas DataFrame using:
import pandas as pd
from odo import odo
odo(expr, pd.DataFrame)
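For example, here's a minimal sketch of that workflow, mirroring the pattern used later in this notebook (the date cutoff is arbitrary, and dataset refers to the import in the first cell below):
import pandas as pd
from odo import odo
# Filter server-side with Blaze first so only the matching rows
# come back, then materialize them as a pandas DataFrame.
recent = dataset[dataset.asof_date >= '2010-01-01']
recent_df = odo(recent, pd.DataFrame)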
To learn how to use this data in your algorithms, skip ahead to the Pipeline Overview section of this notebook.
In [1]:
# import the dataset
from quantopian.interactive.data.quandl import bundesbank_bbk01_wt5511 as dataset
# Since this data is public domain and provided by Quandl for free, there is no _free version of this
# dataset, as there is for the premium sets. This import gets you the entirety of the dataset.
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
In [2]:
# Let's use Blaze's dshape attribute to understand the data's shape and column types
dataset.dshape
Out[2]:
In [3]:
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
Out[3]:
In [4]:
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
Out[4]:
Let's plot it for fun.
In [8]:
gold_df = odo(dataset, pd.DataFrame)
gold_df.plot(x='asof_date', y='value')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Price of Gold")
plt.title("Gold Price")
plt.legend().set_visible(False)
The data points between 2007 and 2015 are missing because the number of results is limited to 10,000. Let's narrow the timeframe to get a complete picture of recent prices.
In [7]:
small_df = odo(dataset[dataset.asof_date >= '2002-01-01'], pd.DataFrame)
small_df.plot(x='asof_date', y='value')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Price of Gold")
plt.title("Gold Price")
plt.legend().set_visible(False)
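As an aside: if you need the full history despite the 10,000-row cap, one workaround (a sketch on our part; the chunk boundaries are arbitrary and assume each window holds fewer than 10,000 rows) is to pull the data in date-bounded pieces and concatenate them in pandas:
chunks = []
for start, end in [('1968-01-01', '1990-01-01'),
                   ('1990-01-01', '2010-01-01'),
                   ('2010-01-01', '2030-01-01')]:
    # Each filtered expression stays under the 10,000-row response cap.
    expr = dataset[(dataset.asof_date >= start) & (dataset.asof_date < end)]
    chunks.append(odo(expr, pd.DataFrame))
# The chunks arrive in date order, so the concatenation is too.
full_df = pd.concat(chunks, ignore_index=True)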
The only method for accessing partner data within algorithms running on Quantopian is via the Pipeline API. Different datasets work differently, but in the case of this data, you can add it to your pipeline as follows:
First, import the dataset:
from quantopian.pipeline.data.quandl import bundesbank_bbk01_wt5511 as gold
Then, in initialize(), you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(gold.value.latest, 'value')
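Putting those pieces together, a minimal initialize() might look like the following sketch (it mirrors the full backtester example at the end of this notebook):
from quantopian.algorithm import attach_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.quandl import bundesbank_bbk01_wt5511 as gold

def initialize(context):
    # Build a pipeline carrying the latest gold value and attach it
    # so its output is available via pipeline_output each day.
    pipe = Pipeline()
    pipe.add(gold.value.latest, 'value')
    attach_pipeline(pipe, 'pipeline')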
In [1]:
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
In [2]:
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import bundesbank_bbk01_wt5511 as gold
Now that we've imported the data, let's take a look at which fields are available. Below you'll find the dataset, its available fields, and the datatype of each field.
In [3]:
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
for data in (gold,):
_print_fields(data)
print "---------------------------------------------------\n"
Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as it would be in the backtester. For more information on using Pipeline in Research, view this thread: https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
In [4]:
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(gold.value.latest, 'value')
In [5]:
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
Out[5]:
In [6]:
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
Out[6]:
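Since run_pipeline returns a DataFrame indexed by (date, security), and the gold value is the same for every security on a given day, one way to collapse the output into a single daily series is the sketch below (the groupby approach is our own suggestion, not from the original notebook):
# Take one value per date from the (date, security) MultiIndex.
daily_gold = pipe_output['value'].groupby(level=0).first()
daily_gold.head()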
Taking what we've seen from above, let's see how we'd move that into the backtester.
In [11]:
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
# Import the datasets available
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import bundesbank_bbk01_wt5511 as gold
def make_pipeline():
    # Create our pipeline
    pipe = Pipeline()
    # Add pipeline factors
    pipe.add(gold.value.latest, 'value')
    return pipe

def initialize(context):
    attach_pipeline(make_pipeline(), "pipeline")

def before_trading_start(context, data):
    results = pipeline_output('pipeline')
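From here, a hypothetical next step (context.gold_value is an illustrative name of our own, not part of the original) is to stash the day's value on context for use elsewhere in the algorithm:
def before_trading_start(context, data):
    results = pipeline_output('pipeline')
    # The gold value is identical across securities on a given day,
    # so take it from the first row and store it on context.
    context.gold_value = results['value'].iloc[0]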
Now you can take that and begin to use it as a building block for your algorithms. For more examples of how to do that, visit our data pipeline factor library.