FireCARES ops management notebook

Using this notebook

To use this notebook, a single production/test web node needs to be bootstrapped with the ipython and django-shell-plus Python libraries. Once bootstrapping is complete, forward a local port to the port that the IPython notebook server is running on, then open the notebook in your browser using the token printed in the SSH session when the notebook server starts.

Bootstrapping a prod/test node

To bootstrap a specific node for use with this notebook, SSH into the node and forward a local port to localhost:8888 on the node.

e.g. ssh firecares-prod -L 8890:localhost:8888 forwards local port 8890 to port 8888 on the web node; this assumes that a "firecares-prod" host entry with the correct web server IP exists in your ~/.ssh/config

  • sudo chown -R firecares: /run/user/1000 as the ubuntu user
  • sudo su firecares
  • workon firecares
  • pip install -r dev_requirements.txt
  • python manage.py shell_plus --notebook --no-browser --settings=firecares.settings.local

At this point, the server will print a line like "The Jupyter Notebook is running at: http://localhost:8888/?token=XXXX". Copy that URL, but if the local port you're forwarding differs from the default of 8888, substitute it before connecting (e.g. 8890 in the example above).

Since the notebook server is started through django-shell-plus, all of the FireCARES models are imported automatically. From here, any command you execute in the notebook runs immediately on the remote web node.
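
As a quick smoke test that the auto-imports and the database connection are working, a cell like the following can be run (a minimal sketch; FireDepartment is one of the auto-imported FireCARES models and the queries are purely illustrative):

# FireDepartment is auto-imported by django-shell-plus, no import needed
print 'Total departments: %s' % FireDepartment.objects.count()
print FireDepartment.objects.filter(state='WA').first()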

Fire department management

Re-generate performance score for a specific fire department

This is useful when a department's FDID has been corrected. The process does the following (a condensed sketch of the task calls follows this list):

  1. Pull NFIRS counts for the department (cached in FireCARES database)
  2. Generate fires heatmap
  3. Update department owned census tracts geom
  4. Regenerate structure hazard counts in jurisdiction
  5. Regenerate population_quartiles materialized view to get safe grades for department
  6. Re-run performance score for the department
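
The cells below walk through these steps interactively. As a condensed sketch, the task-level calls used in this notebook look like the following (only steps 1, 3 and 6 have dedicated task calls shown here; the heatmap, hazard count and materialized view steps are handled separately, and the department id is hard-coded for illustration):

from firecares.tasks import update

fid = 92616  # FireCARES department id (from the lookup below)

# Step 1: refresh the cached NFIRS counts for the department
update.update_nfirs_counts(fid)

# Step 3: recompute the department's owned census tract geometry
update.calculate_department_census_geom(fid)

# Step 6: re-run the performance score once the inputs above are refreshed
update.update_performance_score(fid)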

In [9]:
import psycopg2
from firecares.tasks import update
from firecares.utils import dictfetchall
from django.db import connections
from django.conf import settings
from django.core.management import call_command
from IPython.display import display
import pandas as pd
import csv  # needed by the building-fire CSV export cell below

In [2]:
fd = {'fdid': '18M04', 'state': 'WA'}
nfirs = connections['nfirs']
department = FireDepartment.objects.filter(**fd).first()
fid = department.id
print 'FireCARES id: %s' % fid
print 'https://firecares.org/departments/%s' % fid


FireCARES id: 92616
https://firecares.org/departments/92616

In [6]:
%%time
# Get raw fire incident counts (prior to the parcel / structure hazard level intersection)

with nfirs.cursor() as cur:
    cur.execute("""
        select count(1), fdid, state, extract(year from inc_date) as year
        from fireincident where fdid=%(fdid)s and state=%(state)s
        group by fdid, state, year
        order by year""", fd)
    fire_years = dictfetchall(cur)
    display(fire_years)
    print 'Total fires: %s\n' % sum([x['count'] for x in fire_years])


[{'count': 24L, 'fdid': u'18M04', 'state': u'WA', 'year': 2003.0},
 {'count': 31L, 'fdid': u'18M04', 'state': u'WA', 'year': 2004.0},
 {'count': 43L, 'fdid': u'18M04', 'state': u'WA', 'year': 2005.0},
 {'count': 52L, 'fdid': u'18M04', 'state': u'WA', 'year': 2006.0},
 {'count': 38L, 'fdid': u'18M04', 'state': u'WA', 'year': 2007.0},
 {'count': 19L, 'fdid': u'18M04', 'state': u'WA', 'year': 2008.0},
 {'count': 41L, 'fdid': u'18M04', 'state': u'WA', 'year': 2009.0},
 {'count': 36L, 'fdid': u'18M04', 'state': u'WA', 'year': 2010.0},
 {'count': 32L, 'fdid': u'18M04', 'state': u'WA', 'year': 2011.0},
 {'count': 56L, 'fdid': u'18M04', 'state': u'WA', 'year': 2014.0},
 {'count': 49L, 'fdid': u'18M04', 'state': u'WA', 'year': 2015.0}]
Total fires: 421

CPU times: user 8.68 ms, sys: 69 µs, total: 8.74 ms
Wall time: 35.3 ms
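
For easier inspection, the same per-year counts can be viewed as a DataFrame (a convenience sketch using the pandas import above; output omitted):

# Tabulate the raw fire counts by year with pandas
df_years = pd.DataFrame(fire_years).set_index('year')
display(df_years[['count']])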

In [7]:
%%time
# Get building fire counts after structure hazard level calculations
sql = update.STRUCTURE_FIRES

print sql

with nfirs.cursor() as cur:
    cur.execute(sql, dict(fd, years=tuple([x['year'] for x in fire_years])))
    fires_by_hazard_level = dictfetchall(cur)
    display(fires_by_hazard_level)
    print 'Total geocoded fires: %s\n' % sum([x['count'] for x in fires_by_hazard_level])


SELECT count(1) as count, extract(year from a.alarm) as year, COALESCE(y.risk_category, 'N/A') as risk_level
FROM joint_buildingfires a
LEFT JOIN
    (SELECT state, fdid, inc_date, inc_no, exp_no, x.parcel_id, x.risk_category
        FROM ( SELECT *
            FROM joint_incidentaddress a
            LEFT JOIN parcel_risk_category_local using (parcel_id)
        ) AS x
    ) AS y
USING (state, fdid, inc_date, inc_no, exp_no)
WHERE a.state = %(state)s AND a.fdid = %(fdid)s AND extract(year FROM a.inc_date) IN %(years)s
GROUP BY y.risk_category, extract(year from a.alarm)
ORDER BY extract(year from a.alarm) DESC
[{'count': 11L, 'risk_level': u'N/A', 'year': 2015.0},
 {'count': 1L, 'risk_level': u'High', 'year': 2014.0},
 {'count': 17L, 'risk_level': u'Low', 'year': 2014.0},
 {'count': 3L, 'risk_level': u'Medium', 'year': 2014.0},
 {'count': 8L, 'risk_level': u'N/A', 'year': 2014.0},
 {'count': 15L, 'risk_level': u'Low', 'year': 2011.0},
 {'count': 2L, 'risk_level': u'Medium', 'year': 2011.0},
 {'count': 2L, 'risk_level': u'N/A', 'year': 2011.0},
 {'count': 11L, 'risk_level': u'Low', 'year': 2010.0},
 {'count': 2L, 'risk_level': u'Medium', 'year': 2010.0},
 {'count': 8L, 'risk_level': u'N/A', 'year': 2010.0},
 {'count': 1L, 'risk_level': u'High', 'year': 2009.0},
 {'count': 5L, 'risk_level': u'Low', 'year': 2009.0},
 {'count': 2L, 'risk_level': u'Medium', 'year': 2009.0},
 {'count': 13L, 'risk_level': u'N/A', 'year': 2009.0},
 {'count': 5L, 'risk_level': u'Low', 'year': 2008.0},
 {'count': 3L, 'risk_level': u'N/A', 'year': 2008.0},
 {'count': 1L, 'risk_level': u'High', 'year': 2007.0},
 {'count': 12L, 'risk_level': u'Low', 'year': 2007.0},
 {'count': 4L, 'risk_level': u'Medium', 'year': 2007.0},
 {'count': 12L, 'risk_level': u'Low', 'year': 2006.0},
 {'count': 5L, 'risk_level': u'Medium', 'year': 2006.0},
 {'count': 6L, 'risk_level': u'N/A', 'year': 2006.0},
 {'count': 1L, 'risk_level': u'High', 'year': 2005.0},
 {'count': 5L, 'risk_level': u'Low', 'year': 2005.0},
 {'count': 9L, 'risk_level': u'Medium', 'year': 2005.0},
 {'count': 4L, 'risk_level': u'N/A', 'year': 2005.0},
 {'count': 5L, 'risk_level': u'Low', 'year': 2004.0},
 {'count': 6L, 'risk_level': u'Medium', 'year': 2004.0},
 {'count': 3L, 'risk_level': u'N/A', 'year': 2004.0},
 {'count': 7L, 'risk_level': u'Low', 'year': 2003.0},
 {'count': 1L, 'risk_level': u'Medium', 'year': 2003.0},
 {'count': 2L, 'risk_level': u'N/A', 'year': 2003.0}]
Total geocoded fires: 192

CPU times: user 6.27 ms, sys: 0 ns, total: 6.27 ms
Wall time: 36.8 ms
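
A year-by-hazard-level view of the same data is handy when comparing against the raw counts above (a sketch; year/risk-level combinations with no fires show as NaN):

# Pivot the structure fire counts into a year x risk_level table
df_hazard = pd.DataFrame(fires_by_hazard_level)
display(df_hazard.pivot(index='year', columns='risk_level', values='count'))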

In [37]:
sql = """
select alarm, a.inc_type, alarms,ff_death, oth_death, ST_X(geom) as x, st_y(geom) as y, COALESCE(y.risk_category, 'Unknown') as risk_category
from buildingfires a
LEFT JOIN (
    SELECT state, fdid, inc_date, inc_no, exp_no, x.geom, x.parcel_id, x.risk_category
    FROM (
        SELECT * FROM incidentaddress a
        LEFT JOIN parcel_risk_category_local using (parcel_id)
    ) AS x
) AS y
    USING (state, fdid, inc_date, inc_no, exp_no)
WHERE a.state = %(state)s and a.fdid = %(fdid)s"""

with nfirs.cursor() as cur:
    cur.execute(sql, fd)
    rows = dictfetchall(cur)
    
out_name = '{id}-building-fires.csv'.format(id=fid)
full_path = '/tmp/' + out_name

with open(full_path, 'w') as f:
    writer = csv.DictWriter(f, fieldnames=[x.name for x in cur.description])
    writer.writeheader()
    writer.writerows(rows)

# Push building fires to S3
!aws s3 cp $full_path s3://firecares-test/$out_name --acl="public-read"


upload: ../../../tmp/92616-building-fires.csv to s3://firecares-test/92616-building-fires.csv
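
As a sanity check on the export, the CSV written above can be read back and its row count compared against the query result (a sketch using the local temp file):

# Verify the CSV on disk has the same number of rows as the query result
exported = pd.read_csv(full_path)
print 'Exported rows: %s (query returned %s)' % (len(exported), len(rows))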

In [80]:
update.update_nfirs_counts(fid)


updating NFIRS counts for 92616
...updated NFIRS counts for 92616

In [20]:
update.calculate_department_census_geom(fid)


No census geom - Poulsbo Fire Department (Kitsap Fire District 18) (92616)

In [19]:
# Fire counts by hazard level over all years. Keep in mind that the performance score model will currently ONLY work
# for hazard levels w/ a sufficient number of records (see the NotEnoughRecords errors below)
display(pd.DataFrame(fires_by_hazard_level).groupby(['risk_level']).sum()['count'])

update.update_performance_score(fid)


risk_level
High       4
Low       94
Medium    34
N/A       60
Name: count, dtype: int64
updating performance score for 92616
Error updating DIST score: Traceback (most recent call last):
  File "firecares/tasks/update.py", line 130, in update_performance_score
    dist = dist_model(floor_extent=False, **counts)
  File "/webapps/firecares/src/fire-risk/fire_risk/models/DIST/DIST.py", line 90, in __init__
    raise NotEnoughRecords
NotEnoughRecords
.
Error updating DIST score: Traceback (most recent call last):
  File "firecares/tasks/update.py", line 130, in update_performance_score
    dist = dist_model(floor_extent=False, **counts)
  File "/webapps/firecares/src/fire-risk/fire_risk/models/DIST/DIST.py", line 425, in __init__
    beyond, **kwargs)
  File "/webapps/firecares/src/fire-risk/fire_risk/models/DIST/DIST.py", line 90, in __init__
    raise NotEnoughRecords
NotEnoughRecords
.
Error updating DIST score: Traceback (most recent call last):
  File "firecares/tasks/update.py", line 130, in update_performance_score
    dist = dist_model(floor_extent=False, **counts)
  File "/webapps/firecares/src/fire-risk/fire_risk/models/DIST/DIST.py", line 90, in __init__
    raise NotEnoughRecords
NotEnoughRecords
.
clearing High level from 92616 due to missing categories in aggregation
updating fdid: 92616 - All performance score from: 26.0 to 27.0.
...updated performance score for 92616
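
To confirm that the new score was persisted, the department can be re-fetched from the database. Note that the dist_model_score attribute below is an assumption; check the FireDepartment model for the exact field name:

# Re-fetch the department to pick up the freshly persisted values
department = FireDepartment.objects.get(id=fid)

# NOTE: 'dist_model_score' is an assumed field name for the performance score;
# adjust to whatever the FireDepartment model actually exposes.
print 'Updated score for %s: %s' % (department, getattr(department, 'dist_model_score', None))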

User management

Whitelist