Getting Analytic Feed Results

This notebook shows how to paginate through Planet Analytic Feed results for an existing analytics subscription and construct a combined GeoJSON feature collection that can be imported into geospatial analysis tools.

Setup

To use this notebook, you need an API key for a Planet account with access to the Analytics API.

API Key and Test Connection

Set API_KEY below if it is not already in your notebook as an environment variable. See the Analytics API Docs for more details on authentication.


In [ ]:
import os
import requests

# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')

# construct auth tuple for use in the requests library
BASIC_AUTH = (API_KEY, '')
BASE_URL = "https://api.planet.com/analytics/"

subscriptions_list_url = BASE_URL + 'subscriptions' + '?limit=1000'
resp = requests.get(subscriptions_list_url, auth=BASIC_AUTH)
if resp.status_code == 200:
    print('Yay, you can access the Analytics API')
    subscriptions = resp.json()['data']
    print('Available subscriptions:', len(subscriptions))
else:
    print('Something is wrong:', resp.content)

Specify Analytics Subscription of Interest

Below, we list your available subscription IDs and some metadata in a pandas DataFrame, then select a subscription of interest.


In [ ]:
import pandas as pd
pd.options.display.max_rows = 1000
df = pd.DataFrame(subscriptions)
df['start'] = pd.to_datetime(df['startTime']).dt.date
df['end'] = pd.to_datetime(df['endTime']).dt.date
df[['id', 'title', 'description', 'start', 'end']]

Pick a subscription from which to pull results, and replace the ID below.


In [ ]:
# This example ID is for a subscription of ship detections in the Port of Oakland
# You can replace this ID with your own subscription ID
SUBSCRIPTION_ID = '9db92275-1d89-4d3b-a0b6-68abd2e94142'
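
As an optional sanity check, you can confirm that the chosen ID appears among the subscriptions fetched above (a small sketch using only the fields already shown in the DataFrame):


In [ ]:
# optional: verify that SUBSCRIPTION_ID is one of your available subscriptions
matching = [s for s in subscriptions if s['id'] == SUBSCRIPTION_ID]
assert matching, 'SUBSCRIPTION_ID not found among your subscriptions'
print('Found subscription:', matching[0]['title'])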

Getting subscription results

In this section, we will make sure that we can get data from the subscription of interest by fetching the latest page of results.


In [ ]:
import json

# Construct the url for the subscription's results collection
subscription_results_url = BASE_URL + 'collections/' + SUBSCRIPTION_ID + '/items'
print("Request URL: {}".format(subscription_results_url))

# Get subscription results collection
resp = requests.get(subscription_results_url, auth=BASIC_AUTH)
if resp.status_code == 200:
    print('Yay, you can access analytic feed results!')
    subscription_results = resp.json()
    print(json.dumps(subscription_results, sort_keys=True, indent=4))
else:
    print('Something is wrong:', resp.content)

Pagination

The response JSON above includes only the most recent 250 detections by default. For subscriptions with many results, you can page through the rest by following the links included in each response.


In [ ]:
# number of features returned in this page of results
print(len(subscription_results['features']))

More results can be fetched by following the next link. Let's look at the links section of the response:


In [ ]:
subscription_results['links']

To get more results, we want the link whose rel is 'next'.


In [ ]:
def get_next_link(results_json):
    """Given a response json from one page of subscription results, get the url for the next page of results."""
    for link in results_json['links']:
        if link['rel'] == 'next':
            return link['href']
    return None

In [ ]:
next_link = get_next_link(subscription_results)
print('next page url:', next_link)

Using this URL, we can fetch the next page of results.


In [ ]:
next_results = requests.get(next_link, auth=BASIC_AUTH).json()
print(json.dumps(next_results, sort_keys=True, indent=4))

Aggregating results

Each page of results comes as one feature collection. We can combine the features from different pages into one big feature collection. Below, we page through all results created in the subscription over the past 90 days (roughly three months) and build a combined feature collection.

Results in the API are ordered by a created timestamp. This corresponds to the time that the feature was published to a Feed and does not necessarily match the observed timestamp in the feature's properties, which corresponds to when the source imagery for the feature was collected.


In [ ]:
latest_feature = subscription_results['features'][0]
creation_datestring = latest_feature['created']
print('latest feature creation date:', creation_datestring)
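
Each feature's properties should also include the observed timestamp mentioned above. A minimal look at both timestamps, using .get in case a given feed omits the property:


In [ ]:
# compare publication time ('created') with imagery collection time ('observed')
print('created :', latest_feature['created'])
print('observed:', latest_feature['properties'].get('observed', 'not present'))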

In [ ]:
from dateutil.parser import parse
# this date string can be parsed as a datetime and converted to a date
latest_date = parse(creation_datestring).date()
latest_date

In [ ]:
from datetime import timedelta
min_date = latest_date - timedelta(days=90)
print('Aggregate all detections from after this date:', min_date)

In [ ]:
feature_collection = {'type': 'FeatureCollection', 'features': []}
next_link = subscription_results_url

while next_link:
    results = requests.get(next_link, auth=BASIC_AUTH).json()
    next_features = results['features']
    if next_features:
        latest_feature_creation = parse(next_features[0]['created']).date()
        earliest_feature_creation = parse(next_features[-1]['created']).date()
        print('Fetched {} features ({}, {})'.format(
            len(next_features), earliest_feature_creation, latest_feature_creation))
        # keep only features created on or after the cutoff date
        feature_collection['features'].extend(
            [f for f in next_features if parse(f['created']).date() >= min_date])
        # results are ordered by created time, so once a page reaches features
        # older than the cutoff, no later page can qualify
        if earliest_feature_creation < min_date:
            next_link = None
        else:
            next_link = get_next_link(results)
    else:
        next_link = None

print('Total features: {}'.format(len(feature_collection['features'])))
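
As a quick check that the cutoff was applied, you can inspect the created-date range of the aggregated features (assuming the collection is non-empty):


In [ ]:
# sanity check: all aggregated features should have been created on or after min_date
created_dates = [parse(f['created']).date() for f in feature_collection['features']]
print('earliest:', min(created_dates), 'latest:', max(created_dates))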

Saving Results

We can now save the combined GeoJSON feature collection to a file.


In [ ]:
from IPython.display import FileLink
os.makedirs('data', exist_ok=True)
filename = 'data/collection_{}.geojson'.format(SUBSCRIPTION_ID)
with open(filename, 'w') as file:
    json.dump(feature_collection, file)

FileLink(filename)

After downloading the aggregated GeoJSON file with the file link above, try importing the data into a GeoJSON-compatible tool, such as geojson.io or QGIS, for visualization and exploration.

The saved GeoJSON file can also be loaded into a GeoPandas GeoDataFrame.


In [ ]:
import geopandas as gpd
gpd.read_file(filename)
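
From here, the GeoDataFrame can be explored like any other; a small sketch, assuming the standard GeoPandas API:


In [ ]:
gdf = gpd.read_file(filename)
# GeoJSON coordinates are WGS84 (EPSG:4326) per the specification
print(gdf.crs)
# count features by geometry type (e.g. Point or Polygon detections)
print(gdf.geometry.geom_type.value_counts())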
