Retrieve NetCDF and model gridded climate time-series for a watershed

Case study: the Sauk-Suiattle Watershed

Use this Jupyter Notebook to:

1. Set up and prepare HydroShare
2. Re-establish the paths to the mapping file
3. Compute daily, monthly, and annual temperature and precipitation statistics
4. Visualize precipitation results relative to the forcing data
5. Visualize the time-series trends
6. Save results back into HydroShare

This data is compiled to support digital observation of watersheds, powered by HydroShare.
Provided by the Watershed Dynamics Group, Dept. of Civil and Environmental Engineering, University of Washington

1. HydroShare Setup and Preparation

To run this notebook, we must import several libraries. These are listed in order: 1) Python standard libraries; 2) the hs_utils library, which provides functions for interacting with HydroShare, including resource querying, downloading, and creation; and 3) the observatory_gridded_hydromet (ogh) library that is downloaded with this notebook.


In [1]:
#!conda install -c conda-forge ogh libgdal gdal pygraphviz ncurses matplotlib=2.2.3 --yes

In [8]:
# silence warnings
import warnings
warnings.filterwarnings("ignore")

# data processing
import os
import pandas as pd, numpy as np, dask

# data migration library
import ogh
import ogh_xarray_landlab as oxl
from utilities import hydroshare
from ecohydrology_model_functions import run_ecohydrology_model, plot_results
from landlab import imshow_grid, CLOSED_BOUNDARY

# modeling input params
InputFile = os.path.join(os.getcwd(),'ecohyd_inputs.yaml')

# plotting and shape libraries
import matplotlib.pyplot as plt
%matplotlib inline

In [9]:
InputFile


Out[9]:
'/home/jovyan/work/Observatory-1/tutorials/ecohyd_inputs.yaml'

In [3]:
# initialize ogh_meta
meta_file = dict(ogh.ogh_meta())
sorted(meta_file.keys())


Out[3]:
['dailymet_bclivneh2013',
 'dailymet_livneh2013',
 'dailymet_livneh2015',
 'dailyvic_livneh2013',
 'dailyvic_livneh2015',
 'dailywrf_bcsalathe2014',
 'dailywrf_salathe2014']

Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.


In [4]:
notebookdir = os.getcwd()

hs=hydroshare.hydroshare()
homedir = hs.getContentPath(os.environ["HS_RES_ID"])
os.chdir(homedir)


Adding the following system variables:
   HS_USR_NAME = jphuong
   HS_RES_ID = 87dc5742cf164126a11ff45c3307fd9d
   HS_RES_TYPE = compositeresource
   JUPYTER_HUB_IP = jupyter.cuahsi.org

These can be accessed using the following command: 
   os.environ[key]

   (e.g.)
   os.environ["HS_USR_NAME"]  => jphuong
Successfully established a connection with HydroShare

If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, you can migrate this data to the HydroShare iRODS server as a Generic Resource.
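
As a quick check, you can list what is currently in homedir using only the Python standard library (a minimal sketch; the exact contents depend on your resources):

In [ ]:
# peek at the contents of the resource's working directory
print(homedir)
sorted(os.listdir(homedir))[:10]  # first ten entries, alphabetically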

2. Get list of gridded climate points for the watershed

For visualization purposes, we will also remap the study site shapefile, which is stored in HydroShare at the following URL: https://www.hydroshare.org/resource/c532e0578e974201a0bc40a37ef2d284/. Since the shapefile was previously migrated, we can select 'N' to skip overwriting.

In the usecase1 notebook, the treatgeoself function identified the gridded cell centroid coordinates that overlap with our study site. These coordinates were documented within the mapping file, which will be remapped here. In the usecase2 notebook, the downloaded files were cataloged within the mapping file, so we will use the mappingfileSummary function to characterize the files available for Sauk-Suiattle for each gridded data product.


In [5]:
"""
1/16-degree Gridded cell centroids
"""
# List of available data
hs.getResourceFromHydroShare('ef2d82bf960144b4bfb1bae6242bcc7f')
NAmer = hs.content['NAmer_dem_list.shp']


"""
Sauk
"""
# Watershed extent
hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')
sauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']

# reproject the shapefile into WGS84
ogh.reprojShapefile(sourcepath=sauk)


This resource already exists in your userspace.
ef2d82bf960144b4bfb1bae6242bcc7f/
|-- ef2d82bf960144b4bfb1bae6242bcc7f/
|   |-- bagit.txt
|   |-- manifest-md5.txt
|   |-- readme.txt
|   |-- tagmanifest-md5.txt
|   |-- data/
|   |   |-- resourcemap.xml
|   |   |-- resourcemetadata.xml
|   |   |-- contents/
|   |   |   |-- NAmer_dem_list.cpg
|   |   |   |-- NAmer_dem_list.dbf
|   |   |   |-- NAmer_dem_list.prj
|   |   |   |-- NAmer_dem_list.sbn
|   |   |   |-- NAmer_dem_list.sbx
|   |   |   |-- NAmer_dem_list.shp
|   |   |   |-- NAmer_dem_list.shx

Do you want to overwrite these data [Y/n]? n
Found the following file(s) associated with this HydroShare resource.
NAmer_dem_list.cpg
NAmer_dem_list.dbf
NAmer_dem_list.prj
NAmer_dem_list.sbn
NAmer_dem_list.sbx
NAmer_dem_list.shp
NAmer_dem_list.shx
These files are stored in a dictionary called hs.content for your convenience. To access a file, simply issue the following command where MY_FILE is one of the files listed above:
hs.content["MY_FILE"] 
This resource already exists in your userspace.
c532e0578e974201a0bc40a37ef2d284/
|-- c532e0578e974201a0bc40a37ef2d284/
|   |-- bagit.txt
|   |-- manifest-md5.txt
|   |-- readme.txt
|   |-- tagmanifest-md5.txt
|   |-- data/
|   |   |-- resourcemap.xml
|   |   |-- resourcemetadata.xml
|   |   |-- contents/
|   |   |   |-- wbdhub12_17110006_WGS84_Basin.cpg
|   |   |   |-- wbdhub12_17110006_WGS84_Basin.shp
|   |   |   |-- wbdhub12_17110006_WGS84_Basin.shx
|   |   |   |-- wbdhub12_17110006_WGS84_Basin.dbf
|   |   |   |-- wbdhub12_17110006_WGS84_Basin.prj

Do you want to overwrite these data [Y/n]? n
Found the following file(s) associated with this HydroShare resource.
wbdhub12_17110006_WGS84_Basin.cpg
wbdhub12_17110006_WGS84_Basin.shp
wbdhub12_17110006_WGS84_Basin.shx
wbdhub12_17110006_WGS84_Basin.dbf
wbdhub12_17110006_WGS84_Basin.prj
These files are stored in a dictionary called hs.content for your convenience. To access a file, simply issue the following command where MY_FILE is one of the files listed above:
hs.content["MY_FILE"] 

Summarize the file availability from each watershed mapping file


In [6]:
%%time

# map the mappingfiles from usecase1
mappingfile1=ogh.treatgeoself(shapefile=sauk, NAmer=NAmer, buffer_distance=0.06,
                              mappingfile=os.path.join(homedir,'Sauk_mappingfile.csv'))


(99, 4)
   FID       LAT      LONG_    ELEV
0    0  48.53125 -121.59375  1113.0
1    1  48.46875 -121.46875   646.0
2    2  48.46875 -121.53125   321.0
3    3  48.46875 -121.59375   164.0
4    4  48.46875 -121.65625   369.0
CPU times: user 23.8 s, sys: 515 ms, total: 24.4 s
Wall time: 24.4 s
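
To verify the catalog, read the mapping file back into a dataframe with ogh.mappingfileToDF (the same helper used again in the next section):

In [ ]:
# read the mapping file back in as a (dataframe, station count) pair
maptable, nstations = ogh.mappingfileToDF(mappingfile1)
print(nstations)  # expect 99 gridded cell centroids
maptable.head()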

3. Compare Hydrometeorology

This section computes and plots the Livneh 2013 and Salathe 2014 mean temperature and mean total monthly precipitation so that the two datasets can be compared. The generated plots are automatically saved as .png files within the homedir directory.

Let's compare Livneh 2013 and Salathe 2014 over their period of overlapping history.


In [7]:
help(ogh.getDailyMET_livneh2013)


Help on function getDailyMET_livneh2013 in module ogh.ogh:

getDailyMET_livneh2013(homedir, mappingfile, subdir='livneh2013/Daily_MET_1915_2011/raw', catalog_label='dailymet_livneh2013')
    Get the Livneh el al., 2013 Daily Meteorology files of interest using the reference mapping file
    
    homedir: (dir) the home directory to be used for establishing subdirectories
    mappingfile: (dir) the file path to the mappingfile, which contains the LAT, LONG_, and ELEV coordinates of interest
    subdir: (dir) the subdirectory to be established under homedir
    catalog_label: (str) the preferred name for the series of catalogged filepaths
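
For reference, the ASCII-based retrieval would follow the signature above; a minimal sketch is shown here, not executed, since this notebook uses the NetCDF route instead. The parallel call for the Salathe 2014 WRF product is an assumption about the ogh API:

In [ ]:
# minimal sketch (not run): ASCII-based retrieval using the mapping file
# ogh.getDailyMET_livneh2013(homedir=homedir, mappingfile=mappingfile1)

# assumption: an analogous retrieval exists for the Salathe 2014 WRF product
# ogh.getDailyWRF_salathe2014(homedir=homedir, mappingfile=mappingfile1)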


In [8]:
help(oxl.get_x_dailymet_Livneh2013_raw)


Help on function get_x_dailymet_Livneh2013_raw in module ogh_xarray_landlab:

get_x_dailymet_Livneh2013_raw(homedir, spatialbounds, subdir='livneh2013/Daily_MET_1915_2011/raw_netcdf', nworkers=4, start_date='1915-01-01', end_date='2011-12-31', rename_timelatlong_names={'lat': 'LAT', 'lon': 'LON', 'time': 'TIME'}, file_prefix='sp_', replace_file=True)
    get Daily MET data from Livneh et al. (2013) using xarray on netcdf files

NetCDF retrieval and clipping to a spatial extent

The function get_x_dailywrf_salathe2014 retrieves and clips NetCDF files archived within the UW Rocinante NNRP repository. This archive contains daily data from January 1970 through December 1979 (10 years). Each NetCDF file comprises the meteorologic and VIC hydrologic outputs for one calendar month, so the expected number of files is 120 (12 months for each of 10 years).

In the code chunk below, six parallel workers will be initialized to distribute the file retrieval and spatial clipping tasks. Each worker will wget its requested file, clip the NetCDF file to the gridded cell centroids within the provided bounding box, then return the location of the spatially clipped output file.

Provide the home and subdirectory where the cropped NetCDF files will be stored. Also provide the spatial bounds (in WGS84) to crop the NetCDF files upon download. Finally, provide the number of workers to carry out the download tasks, and the start and end date of the files of interest.


In [9]:
maptable, nstations = ogh.mappingfileToDF(mappingfile1)
spatialbounds = {'minx':maptable.LONG_.min(), 'maxx':maptable.LONG_.max(),
                 'miny':maptable.LAT.min(), 'maxy':maptable.LAT.max()}

outputfiles = oxl.get_x_dailymet_Livneh2013_raw(homedir=homedir,
                                                subdir='livneh2013/Daily_MET_1970_1970/raw_netcdf',
                                                spatialbounds=spatialbounds,
                                                nworkers=6,
                                                start_date='1970-01-01', end_date='1970-12-31')


Number of gridded data files:99
Minimum elevation: 164.0m
Mean elevation: 1151.040404040404m
Maximum elevation: 2216.0m
[                                        ] | 0% Completed | 21.9sMeteorology_Livneh_CONUSExt_v.1.2_2013.197002.nc unzipped
[                                        ] | 0% Completed | 22.0sMeteorology_Livneh_CONUSExt_v.1.2_2013.197006.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197011.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197008.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197004.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197012.nc unzipped
[####################                    ] | 50% Completed | 45.5sMeteorology_Livneh_CONUSExt_v.1.2_2013.197009.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197010.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197005.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197007.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197001.nc unzipped
Meteorology_Livneh_CONUSExt_v.1.2_2013.197003.nc unzipped
[########################################] | 100% Completed | 46.0s

Convert collection of NetCDF files into a collection of ASCII files

Provide the home and subdirectory where the ASCII files will be stored, the source_directory of NetCDF files, and the mapping file in which the resulting ASCII files will be cataloged. Also, provide the Pandas datetime offset alias for the frequency of the time steps ('D' for daily). Finally, provide the catalog label that will be used for the mapping file catalog and the metadata file label.


In [10]:
%%time
# convert the netCDF files into daily ascii time-series files for each gridded location
outfilelist = oxl.netcdf_to_ascii(homedir=homedir, 
                                  subdir='livneh2013/Daily_MET_1970_1970/raw_ascii', 
                                  source_directory=os.path.join(homedir, 'livneh2013/Daily_MET_1970_1970/raw_netcdf'),
                                  mappingfile=mappingfile1,
                                  temporal_resolution='D',
                                  meta_file=meta_file,
                                  catalog_label='sp_dailymet_livneh_1970_1970')


[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  1.3s
[########################################] | 100% Completed |  1.4s
CPU times: user 2.14 s, sys: 510 ms, total: 2.65 s
Wall time: 3.03 s
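
As a sanity check, count and preview the converted ASCII outputs; one time-series file per gridded cell centroid is expected:

In [ ]:
# expect one ASCII time-series file per gridded cell centroid (n=99)
print(len(outfilelist))
outfilelist[:3]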

In [11]:
t1 = ogh.mappingfileSummary(listofmappingfiles = [mappingfile1], 
                            listofwatershednames = ['Sauk-Suiattle river'],
                            meta_file=meta_file)

t1


Out[11]:
Watershed                                                     Sauk-Suiattle river
Median elevation in meters [range] (Number of gridded cells)  1171 [164-2216] (n=99)
sp_dailymet_livneh_1970_1970                                  1171 [164-2216] (n=99)

In [12]:
# Save the metadata
ogh.saveDictOfDf(dictionaryObject=meta_file, outfilepath='test.json')

Create a dictionary of climate variables for the long-term mean (ltm).

INPUT: gridded meteorology ASCII files located from the Sauk-Suiattle mapping file. The inputs to gridclim_dict() include the folder location and name of the hydrometeorology data, the file start and end dates, the analysis start and end dates, and the elevation band to be included in the analysis (max and min elevation).
OUTPUT: dictionary of dataframes where rows are temporal summaries and columns are spatial summaries


In [13]:
meta_file['sp_dailymet_livneh_1970_1970']['variable_list']


Out[13]:
['Prec', 'Tmax', 'Tmin', 'Wind']

In [14]:
%%time

ltm = ogh.gridclim_dict(mappingfile=mappingfile1,
                        metadata=meta_file,
                        dataset='sp_dailymet_livneh_1970_1970',
                        variable_list=['Prec','Tmax','Tmin'])

sorted(ltm.keys())


Number of data files within elevation range (164.0-2216.0 m): 99
[########################################] | 100% Completed |  1.6s
[########################################] | 100% Completed |  1.7s
Prec dataframe reading complete:0:00:01.920074
[########################################] | 100% Completed |  1.2s
[########################################] | 100% Completed |  1.2s
Tmax dataframe reading complete:0:00:03.306012
[########################################] | 100% Completed |  1.2s
[########################################] | 100% Completed |  1.2s
Tmin dataframe reading complete:0:00:04.696495
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.2s
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.2s
Prec_sp_dailymet_livneh_1970_1970 calculations completed in 0:00:00.431667
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.2s
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.2s
Tmax_sp_dailymet_livneh_1970_1970 calculations completed in 0:00:00.428043
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.2s
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.2s
Tmin_sp_dailymet_livneh_1970_1970 calculations completed in 0:00:00.429702
CPU times: user 4.44 s, sys: 928 ms, total: 5.37 s
Wall time: 6 s

Compute the total monthly and yearly precipitation, as well as the mean values across time and across stations

INPUT: daily precipitation for each station from the long-term mean dictionary (ltm)
OUTPUT: Append the computed dataframes and values into the ltm dictionary


In [15]:
# extract metadata
dr = meta_file['sp_dailymet_livneh_1970_1970']

# compute the monthly and yearly sums, as well as their mean values
ltm = ogh.aggregate_space_time_sum(df_dict=ltm,
                                   suffix='Prec_sp_dailymet_livneh_1970_1970',
                                   start_date=dr['start_date'],
                                   end_date=dr['end_date'])


[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.2s
[########################################] | 100% Completed |  0.1s
[########################################] | 100% Completed |  0.1s
Prec_sp_dailymet_livneh_1970_1970 calculations completed in 0:00:00.337237

In [16]:
# print the name of the analytical dataframes and values within ltm
sorted(ltm.keys())


Out[16]:
['Prec_sp_dailymet_livneh_1970_1970',
 'Tmax_sp_dailymet_livneh_1970_1970',
 'Tmin_sp_dailymet_livneh_1970_1970',
 'anomyear_Prec_sp_dailymet_livneh_1970_1970',
 'anomyear_Tmax_sp_dailymet_livneh_1970_1970',
 'anomyear_Tmin_sp_dailymet_livneh_1970_1970',
 'meanalldailysum_Prec_sp_dailymet_livneh_1970_1970',
 'meanallmonthsum_Prec_sp_dailymet_livneh_1970_1970',
 'meanallyear_Prec_sp_dailymet_livneh_1970_1970',
 'meanallyear_Tmax_sp_dailymet_livneh_1970_1970',
 'meanallyear_Tmin_sp_dailymet_livneh_1970_1970',
 'meanallyearsum_Prec_sp_dailymet_livneh_1970_1970',
 'meanbydaily_Prec_sp_dailymet_livneh_1970_1970',
 'meanbydaily_Tmax_sp_dailymet_livneh_1970_1970',
 'meanbydaily_Tmin_sp_dailymet_livneh_1970_1970',
 'meanbymonth_Prec_sp_dailymet_livneh_1970_1970',
 'meanbymonth_Tmax_sp_dailymet_livneh_1970_1970',
 'meanbymonth_Tmin_sp_dailymet_livneh_1970_1970',
 'meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970',
 'meanbyyear_Prec_sp_dailymet_livneh_1970_1970',
 'meanbyyear_Tmax_sp_dailymet_livneh_1970_1970',
 'meanbyyear_Tmin_sp_dailymet_livneh_1970_1970',
 'meanbyyearsum_Prec_sp_dailymet_livneh_1970_1970',
 'meandaily_Prec_sp_dailymet_livneh_1970_1970',
 'meandaily_Tmax_sp_dailymet_livneh_1970_1970',
 'meandaily_Tmin_sp_dailymet_livneh_1970_1970',
 'meanmonth_Prec_sp_dailymet_livneh_1970_1970',
 'meanmonth_Tmax_sp_dailymet_livneh_1970_1970',
 'meanmonth_Tmin_sp_dailymet_livneh_1970_1970',
 'meanmonthsum_Prec_sp_dailymet_livneh_1970_1970',
 'meanyear_Prec_sp_dailymet_livneh_1970_1970',
 'meanyear_Tmax_sp_dailymet_livneh_1970_1970',
 'meanyear_Tmin_sp_dailymet_livneh_1970_1970',
 'meanyearsum_Prec_sp_dailymet_livneh_1970_1970',
 'monthsum_Prec_sp_dailymet_livneh_1970_1970',
 'yearsum_Prec_sp_dailymet_livneh_1970_1970']
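
To confirm the layout described earlier (rows are temporal summaries, columns are stations), inspect one of the dataframes. A minimal sketch, assuming the monthly summary carries one row per month and one column per gridded cell:

In [ ]:
# rows are months; columns are gridded cell centroids
df = ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970']
print(df.shape)
df.head()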

In [17]:
# initialize list of outputs
files=[]

# create the destination path for the dictionary of dataframes
ltm_sauk=os.path.join(homedir, 'ltm_1970_1970_sauk.json')
ogh.saveDictOfDf(dictionaryObject=ltm, outfilepath=ltm_sauk)
files.append(ltm_sauk)

# append the mapping file for Sauk-Suiattle gridded cell centroids
files.append(mappingfile1)

Visualize the "average monthly total precipitations"

INPUT: dataframe with each month as a row and each station as a column.
OUTPUT: A png file that represents the distribution across stations (in Wateryear order)


In [18]:
# two lowest-elevation locations
lowE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=164)

# one highest elevation location
highE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=2216)

# combine references together
reference_lines = highE_ref + lowE_ref
reference_lines


Out[18]:
[(69, 48.09375, -121.09375),
 (3, 48.46875, -121.59375),
 (24, 48.34375, -121.53125)]

In [19]:
ogh.renderValueInBoxplot(ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'],
                         outfilepath='totalMonthlyRainfall.png', 
                         plottitle='Total monthly rainfall',
                         time_steps='month',
                         wateryear=True,
                         reference_lines=reference_lines,
                         ref_legend=True,
                         value_name='Total daily precipitation (mm)',
                         cmap='seismic_r',
                         figsize=(6,6))


No handles with labels found to put in legend.
Out[19]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f3a4884a780>

In [20]:
ogh.renderValuesInPoints(ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'], 
                         vardf_dateindex=12, 
                         shapefile=sauk, 
                         cmap='seismic_r',
                         outfilepath='test.png', 
                         plottitle='December total rainfall',
                         colorbar_label='Total monthly rainfall (mm)', 
                         figsize=(1.5,1.5))



In [21]:
minx2, miny2, maxx2, maxy2 = oxl.calculateUTMbounds(mappingfile=mappingfile1,
                                                    mappingfile_crs={'init':'epsg:4326'},
                                                    spatial_resolution=0.06250)

print(minx2, miny2, maxx2, maxy2)


Number of gridded data files:99
Minimum elevation: 164.0m
Mean elevation: 1151.040404040404m
Maximum elevation: 2216.0m
115.26040113876661 -87.99363603095117 65333.46979869377 76402.0986115007
<Figure size 5000x5000 with 0 Axes>

Generate a raster


In [22]:
help(oxl.rasterDimensions)


Help on function rasterDimensions in module ogh_xarray_landlab:

rasterDimensions(maxx, maxy, minx=0, miny=0, dy=100, dx=100)


In [23]:
# generate a raster
raster, row_list, col_list = oxl.rasterDimensions(minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)
raster.shape


Out[23]:
(78, 68)
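
The grid shape follows from the UTM extent and the 1000 m cell size. A rough back-of-the-envelope check (a sketch; the helper appears to pad the extent with extra boundary rows and columns):

In [ ]:
# approximate cell counts spanned by the UTM extent at dx = dy = 1000 m
import math
nrows = math.ceil((maxy2 - miny2) / 1000.)  # ~77, versus 78 reported above
ncols = math.ceil((maxx2 - minx2) / 1000.)  # ~66, versus 68 reported above
print(nrows, ncols)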

Higher-resolution children of gridded cells

Map data from the lower-resolution parent grid cells to their children


In [24]:
help(oxl.mappingfileToRaster)


Help on function mappingfileToRaster in module ogh_xarray_landlab:

mappingfileToRaster(mappingfile, maxx, maxy, minx=0, miny=0, dx=100, dy=100, spatial_resolution=0.0625, mappingfile_crs={'init': 'epsg:4326'}, raster_crs={'init': 'epsg:3857'})


In [25]:
%%time

# landlab raster node crossmap to gridded cell id
nodeXmap, raster, m = oxl.mappingfileToRaster(mappingfile=mappingfile1, spatial_resolution=0.06250, 
                                           minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)


Number of gridded data files:99
Minimum elevation: 164.0m
Mean elevation: 1151.040404040404m
Maximum elevation: 2216.0m
Warning: CRS does not match!
CPU times: user 21.2 s, sys: 63.5 ms, total: 21.3 s
Wall time: 21.2 s
<Figure size 5000x5000 with 0 Axes>

In [26]:
# print the raster dimensions
raster.shape


Out[26]:
(78, 68)

In [27]:
%%time
nodeXmap.plot(column='ELEV', figsize=(10,10), legend=True)


CPU times: user 3.06 s, sys: 280 ms, total: 3.33 s
Wall time: 2.98 s
Out[27]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f3a23004e80>

In [31]:
# generate vector array of December monthly precipitation
prec_vector = ogh.rasterVector(vardf=ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'],
                          vardf_dateindex=12,
                          crossmap=nodeXmap,
                          nodata=-9999)

# close-off areas without data
raster.status_at_node[prec_vector==-9999] = CLOSED_BOUNDARY

fig = plt.figure(figsize=(10,10))
imshow_grid(raster, 
            prec_vector,
            var_name='Monthly precipitation',
            var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Prec'].attrs['units'],
            color_for_closed='black', 
            cmap='seismic_r')



In [32]:
tmax_vector = ogh.rasterVector(vardf=ltm['meanbymonth_Tmax_sp_dailymet_livneh_1970_1970'],
                          vardf_dateindex=12,
                          crossmap=nodeXmap,
                          nodata=-9999)

fig = plt.figure(figsize=(10,10))
imshow_grid(raster, 
            tmax_vector,
            var_name='Daily maximum temperature',
            var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Tmax'].attrs['units'],
            color_for_closed='black', symmetric_cbar=False, cmap='magma')



In [33]:
tmin_vector = ogh.rasterVector(vardf=ltm['meanbymonth_Tmin_sp_dailymet_livneh_1970_1970'],
                          vardf_dateindex=12,
                          crossmap=nodeXmap,
                          nodata=-9999)

fig = plt.figure(figsize=(10,10))
imshow_grid(raster, 
            tmin_vector,
            var_name='Daily minimum temperature',
            var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Tmin'].attrs['units'],
            color_for_closed='black', symmetric_cbar=False, cmap='magma')



In [35]:
# convert a raster vector back to geospatial presentation
t4, t5 = oxl.rasterVectorToWGS(prec_vector, nodeXmap=nodeXmap, UTM_transformer=m)

In [36]:
t4.plot(column='value', figsize=(10,10), legend=True)


Out[36]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f3a1b4b7e48>

In [37]:
# tile the long-term mean daily values to build a long synthetic forcing record for the model
inputvectors = {'precip_met': np.tile(ltm['meandaily_Prec_sp_dailymet_livneh_1970_1970'], 15000),
                'Tmax_met': np.tile(ltm['meandaily_Tmax_sp_dailymet_livneh_1970_1970'], 15000),
                'Tmin_met': np.tile(ltm['meandaily_Tmin_sp_dailymet_livneh_1970_1970'], 15000)}

In [42]:
%%time
(VegType_low, yrs_low, debug_low) = run_ecohydrology_model(raster,
                                                           input_data=inputvectors,
                                                           input_file=InputFile,
                                                           synthetic_storms=False,
                                                           number_of_storms=100000,
                                                           pet_method='PriestleyTaylor')


Elapsed time =  0  years
Elapsed time =  100  years
Elapsed time =  200  years
CPU times: user 2min 20s, sys: 1.17 s, total: 2min 21s
Wall time: 2min 20s

In [43]:
plot_results(raster, VegType_low, yrs_low, yr_step=yrs_low-1)
# save the figure before plt.show(), which clears the active figure
plt.savefig('grid_low.png')
plt.show()


Plotting cellular field of Plant Functional Type
Green - Grass; Red - Shrubs; Black - Trees; White - Bare
<Figure size 432x288 with 0 Axes>

Visualize the "average monthly total precipitation"

5. Save the results back into HydroShare

Using the hs_utils library, the results of the geoprocessing steps above can be saved back into HydroShare. First, define all of the required metadata for resource creation, i.e., title, abstract, keywords, and content files. In addition, we must define the type of resource that will be created, in this case a genericresource.

Note: Make sure you save the notebook at this point, so that all notebook changes will be saved into the new HydroShare resource.

Total files and images to migrate


In [ ]:
len(files)

In [ ]:
# for each file downloaded onto the server folder, move to a new HydroShare Generic Resource
title = 'Computed spatial-temporal summaries of two gridded data products for Sauk-Suiattle'
abstract = 'This resource contains the computed summaries for the Meteorology data from Livneh et al. 2013 and the WRF data from Salathe et al. 2014.'
keywords = ['Sauk-Suiattle', 'Livneh 2013', 'Salathe 2014','climate','hydromet','watershed', 'visualizations and summaries'] 
rtype = 'genericresource'

# create the new resource
resource_id = hs.createHydroShareResource(abstract, 
                                          title,
                                          keywords=keywords, 
                                          resource_type=rtype, 
                                          content_files=files, 
                                          public=False)