In [2]:
import seaborn as sns
import metapack as mp
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display 

%matplotlib inline
sns.set_context('notebook')
mp.jupyter.init()

In [3]:
pkg = mp.jupyter.open_package()
#pkg = mp.jupyter.open_source_package()
pkg


Out[3]:

sandiego.gov-cityiq_parking-1 Last Update: 2019-02-13T01:50:32


Contacts


In [4]:
import datetime
import statsmodels.formula.api as sm
from cityiq import Config, CityIq
from cityiq.scrape import EventScraper
tz = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
from pathlib import Path

config = Config() # Assumes ~/.city-iq.yaml

c = CityIq(config) # May need this later

# Scrape events from Sept 2018
start_time = datetime.datetime(2018, 9, 1, 0, 0, tzinfo = tz)

s = EventScraper(config, start_time,  ['PKIN', 'PKOUT'])

In [5]:
import metapack as mp
pkg = mp.open_package('http://library.metatab.org/sandiego.gov-cityiq_parking-1.csv')
pkg


Out[5]:

San Diego Parking Time Series

sandiego.gov-cityiq_parking-1 Last Update: 2019-02-16T04:36:54

15-minute interval parking utilization for 1600 parking zones in the city of San Diego.

This dataset is compiled from parking events scraped from San Diego's CityIQ smart streetlight system, via the cityiq Python package. The dataset is compiled from PKIN and PKOUT events between Sept 2018 and Feb 2019 for the whole San Diego system.

The dataset is heavily processed to eliminate duplicate events: the raw feed contains many spurious events, and in particular an excess of PKIN events. When computing the number of cars parked in all parking zones, the excess of PKIN events results in about 60,000 extra cars per month. These issues are explored in a Jupyter Notebook

Processing

These data were produced with these programs:

    $ pip install cityiq
    $ ciq_config -w
    # Edit .cityiq-config.yaml with client-id and secret
    # Scrape PKIN and PKOUT from Sept 2018 to present
    $ ciq_events -s -e PKIN -e PKOUT -t 20180901
    # Split the event dump into event-location CSV files
    $ ciq_events -S
    # Deduplicate and normalize
    $ ciq_events -n

The last step, deduplication and normalization, involves these steps (sketched in code below):

  • Group events by event type, location and 1-second period, and select only one record from each group
  • Collect runs of events of one type and select only the first record of the run, for runs up to 4 minutes long
  • For each location, compute the cumulative sum of ins and outs (the number of cars in the zone), then create a 2-day rolling average and subtract it off
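
The first two steps can be sketched with pandas. This is only a minimal illustration, not the code that ciq_events -n actually runs; the events DataFrame and its locationUid, eventType and timestamp columns are assumptions, and the 4-minute cap on run length is omitted for brevity.

    import pandas as pd

    def dedup_events(events):
        # Assumed columns: locationUid, eventType, timestamp (datetime64)
        df = events.sort_values('timestamp').copy()

        # Step 1: keep one record per (location, event type, 1-second period)
        df['second'] = df.timestamp.dt.floor('1s')
        df = df.drop_duplicates(subset=['locationUid', 'eventType', 'second'])

        # Step 2: within each location, collapse runs of the same event type,
        # keeping only the first record of each run
        def collapse_runs(g):
            run_id = (g.eventType != g.eventType.shift()).cumsum()
            return g.groupby(run_id).head(1)

        return df.groupby('locationUid', group_keys=False).apply(collapse_runs)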

The third step is demonstrated in this image:

The blue line is the original utilization for a single location, showing the larger number of PKIN events than PKOUT events. The red line is the 2-day rolling average, and the green line is the result of subtracting the 2-day rolling average.

In the final dataset, the data for the blue line is in the cs column, which is created from the cumulative sum of the delta column. The green line is the data in the cs_norm column, which is differentiated to create the delta_norm column.

For most purposes you should use cs_norm and delta_norm.
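
The cumulative-sum normalization from the third step can be sketched the same way, assuming a per-location DataFrame df with a datetime timestamp column and a delta column that is +1 for PKIN and -1 for PKOUT; only the cs, cs_norm and delta_norm names come from the dataset itself.

    df = df.set_index('timestamp').sort_index()

    df['cs'] = df.delta.cumsum()                        # cars in the zone (blue line)
    df['cs_norm'] = df.cs - df.cs.rolling('2d').mean()  # subtract the 2-day rolling average (green line)
    df['delta_norm'] = df.cs_norm.diff()                # differentiate back into normalized deltas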

Contacts

References


In [7]:
r = pkg.reference('parking_events')


Out[7]:
NoneType

In [9]:
pkg.references()


Out[9]:
[<Resource: sandiego.gov-cityiq_parking-1.csv 27:1 root.datafile http://ds.civicknowledge.org/sandiego.gov/cityiq/cityiq-PKIN_PKOUT-20180901-20190201.csv.zip ['parking_events', 'Parking events', '']>]
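
Since pkg.reference('parking_events') returned None above, one way to read the events is to take the Resource from the references list and load it as a DataFrame. This is a sketch that assumes the metapack Resource.dataframe() accessor; the zipped CSV behind the reference is large, so the download may take a while.

    r = pkg.references()[0]    # the parking_events resource listed above
    events = r.dataframe()     # assumed metapack accessor; downloads and parses the zipped CSV
    events.head()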

In [ ]: