Purpose of this notebook: Introduce you to:
At the end of this lesson, you should be able to read some related hypsometry files, and:
A basic temperature index model will use inputs:
and calculate melt for each date and elevation as:
$Melt\ (km^3) = 0$, if $T \le 0\,^{\circ}C$
$Melt\ (km^3) = \frac{ A_{snow}\,(km^2) \; T\,(^{\circ}C) \; DDF_{snow}\,(mm\ day^{-1}\ {}^{\circ}C^{-1}) }{ 10^6\,(mm\ km^{-1}) }$, if $T > 0\,^{\circ}C$
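For example, with hypothetical values of $A_{snow} = 50\ km^2$, $T = 5\,^{\circ}C$, and $DDF_{snow} = 7\ mm\ day^{-1}\ {}^{\circ}C^{-1}$ (the area and temperature here are made up purely for illustration), the melt for that band on that date would be:
$Melt = \frac{ 50 \times 5 \times 7 }{ 10^6 } = 0.00175\ km^3$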
So, we can read SCA data by date and elevation and Temperature data by date and elevation, and do calculations in the DataFrame space.
For a range of measured DDFs at various global locations, see Table 1 of:
Hock, R. (2003). Temperature index melt modelling in mountain areas. Journal of Hydrology, 282, 104-115.
In [ ]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from importlib import reload
pd.set_option('display.max_rows', 10)
pd.set_option('display.max_columns', 90)
pd.set_option('display.width', 200)
I have prepared sample SCA data for the Hunza sub-basin of the UIB, and some of Andy Barrett's downscaled temperature data for the same sub-basin.
In [ ]:
import hypsometry
reload(hypsometry)
sca_filename = "modscag_gf_by_year/v01/IN_Hunza_at_Danyour.0100m.modscag_gf_covered_area_by_elev.day.2001to2001.v2.asc"
temperature_filename = "forcing_data/ERA-Interim/air_temperature/derived/hunza_era_interim_t2m_profile_100m.2001.dscale.asc"
sca = hypsometry.Hypsometry()
sca.read( sca_filename, verbose=True )
temp = hypsometry.Hypsometry()
temp.read( temperature_filename, verbose=True )
In [ ]:
print( sca.data.index )
print( temp.data.index )
In [ ]:
sca.data.columns
In [ ]:
temp.data.columns
Note that the columns are not quite the same ('1400.' vs. '1450'): Andy labelled his elevation contours with the middle value of each 100 m band, while I labelled my data with a floating-point representation. Both are string Indexes. We want the columns to match so that operations on the DataFrames line up cell by cell. Here are a couple of ways to rename DataFrame columns.
The DataFrame "rename" method lets you rename columns or index values by passing an "anonymous" lambda function that is applied to each label in turn. Note that rename returns a new DataFrame; it does not change the original unless you assign the result back (the copy=True argument only controls whether the underlying data are copied).
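As a minimal standalone sketch (the tiny DataFrame and labels here are made up for illustration), note that rename returns a new object and leaves the original alone:
In [ ]:
# A made-up two-column DataFrame to illustrate rename with a lambda
df = pd.DataFrame( data=[ [ 1, 2 ] ], columns=[ '1400.', '1500.' ] )
renamed = df.rename( columns=lambda x: int( float( x ) ) )
print( df.columns )       # the original string labels are unchanged
print( renamed.columns )  # the renamed copy has integer labels
Now apply the same idea to the actual SCA and temperature columns: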
In [ ]:
# Relabel SCA columns like '1400.' as integers
sca.data = sca.data.rename( columns=lambda x: int( float( x ) ),
                            copy=True )
# Relabel temperature band centers like '1450' to match the SCA labels (1400)
temp.data = temp.data.rename( columns=lambda x: int( x ) - 50,
                              copy=True )
print( sca.data.columns )
print( temp.data.columns )
Now the columns and index values match up:
In [ ]:
sca.data
In [ ]:
temp.data
In [ ]:
# Convert temperatures from Kelvin to degrees Celsius
temp.data = temp.data - 273.15
temp.data
In [ ]:
snow_ddf = 7.0            # degree-day factor for snow, mm per day per deg C
mm_per_km = 10.0 ** 6.    # mm in 1 km, to convert km^2 * mm of melt to km^3
melt = hypsometry.Hypsometry()
melt.data = sca.data * temp.data * snow_ddf / mm_per_km
melt.data
In [ ]:
total_melt = melt.data_by_doy()
total_melt
This isn't quite what we want, since we operated on all values, even when the temperature was too cold for melt to occur. Here is one way to handle it (Python "masked arrays" might be a better way to do this):
In [ ]:
melt = hypsometry.Hypsometry()
melt.data = sca.data * temp.data[ temp.data > 0. ] * snow_ddf / mm_per_km
melt.data[ np.isnan( melt.data ) ] = 0.
melt.data
In the previous cell, I took advantage of a boolean array to limit the operations to indices where the conditions are what I want. Here is a small example to show how this works:
In [ ]:
x = pd.DataFrame( data=[ [ 1, 2 ], [ 3, 4 ] ] )
y = pd.DataFrame( data=[ [ 2, 2 ], [ 3, 3 ] ] )
y > 2                    # a boolean DataFrame (only the last expression in a cell is displayed)
prod = x * y[ y > 2 ]    # NaN wherever y <= 2
x
In [ ]:
y
In [ ]:
prod
In [ ]:
prod[ np.isnan(prod) ] = 0.
prod
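As an alternative to the boolean-indexing-plus-isnan approach above, the DataFrame where method keeps values where a condition is True and fills the rest with a value you supply. A minimal sketch (melt_alt is just an illustrative name, and this assumes the SCA values themselves contain no NaNs):
In [ ]:
# Sketch: keep only above-freezing temperatures, fill the rest with 0 so they produce no melt
melt_alt = hypsometry.Hypsometry()
melt_alt.data = sca.data * temp.data.where( temp.data > 0., 0. ) * snow_ddf / mm_per_km
melt_alt.data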
In [ ]:
total_melt = melt.data_by_doy()
total_melt
And now plot the time series of daily melt:
In [ ]:
fig, ax = plt.subplots(1,1)
ax.set_title( "Simple temperature index model melt, Hunza, 2001" )
ax.set_ylabel('Melt ($km^3$)')
total_melt.plot()
One goal of the CHARIS project is to understand the proportions of snow melt vs. ice melt, so we will define a more complicated melt model.
We will use these inputs, for the Hunza sub-basin of the UIB:
The more complicated model works as follows:
For each date/elevation band:
$Melt\ (km^3) = 0$, if $T \le 0\,^{\circ}C$
Otherwise ($T > 0\,^{\circ}C$):
If the SCA area > the MODICE area, melt the SCA area using $DDF_{snow}$:
$Melt\ (km^3) = \frac{ A_{snow}\,(km^2) \; T\,(^{\circ}C) \; DDF_{snow}\,(mm\ day^{-1}\ {}^{\circ}C^{-1}) }{ 10^6\,(mm\ km^{-1}) }$
Otherwise, melt the MODICE area using $DDF_{ice}$:
$Melt\ (km^3) = \frac{ A_{ice}\,(km^2) \; T\,(^{\circ}C) \; DDF_{ice}\,(mm\ day^{-1}\ {}^{\circ}C^{-1}) }{ 10^6\,(mm\ km^{-1}) }$
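Before doing this in DataFrame space, here is a minimal sketch of the same decision logic for a single date/elevation cell, written as a plain Python function (the function name and scalar interface are invented for illustration only):
In [ ]:
def cell_melt_km3( sca_km2, modice_km2, temp_C, ddf_snow=7., ddf_ice=9., mm_per_km=10.0 ** 6. ):
    """Melt (km^3) for one date/elevation cell of the snow/ice model."""
    if temp_C <= 0.:
        return 0.
    if sca_km2 > modice_km2:
        # snow covers more than the ice area: melt the snow-covered area
        return sca_km2 * temp_C * ddf_snow / mm_per_km
    # otherwise melt the (MODICE) ice area
    return modice_km2 * temp_C * ddf_ice / mm_per_km

cell_melt_km3( 50., 20., 5. )   # made-up areas (km^2) and temperature (deg C)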
We already have SCA and temperature data, so read the MODICE data (and check/adjust the column names):
In [ ]:
import hypsometry
reload(hypsometry)
modice_filename = "modice_area_by_elev/IN_Hunza_at_Danyour.0100m.modicev03_area_by_elev.txt"
modice = hypsometry.Hypsometry()
modice.read( modice_filename, verbose=True)
In [ ]:
modice.data = modice.data.rename( columns=lambda x: int( float( x ) ),
copy=True )
modice.data.columns
So we have MODICE data with just one row, and we want to compare it to each row of the snow-covered-area data, which is dates (rows) by elevations (columns). In order to do the math in the DataFrame space, I need to make a MODICE DataFrame with that one row repeated for every date, using the same index and columns as the SCA data:
In [ ]:
# Repeat the single MODICE row once for each date in the SCA record
daily_modice = []
for row in range( len( sca.data.index ) ):
    daily_modice.append( modice.data.iloc[0].values )
modice.data = pd.DataFrame( data=daily_modice, index=sca.data.index, columns=sca.data.columns )
modice.data
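An equivalent, more concise way to build the same date-by-elevation DataFrame (a sketch; modice_daily is just an illustrative name) is to tile the single MODICE row with numpy:
In [ ]:
# Sketch: repeat the one MODICE row for every date without an explicit loop
repeated = np.tile( modice.data.iloc[0].values, ( len( sca.data.index ), 1 ) )
modice_daily = pd.DataFrame( data=repeated, index=sca.data.index, columns=sca.data.columns )
modice_daily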
Now calculate the snow melt part (note the boolean logic that limits the calculation to just the cells we want):
In [ ]:
snow_ddf = 7. # mm per day per deg C
snow_melt = hypsometry.Hypsometry()
snow_melt.data = sca.data[ (sca.data > modice.data) & ( temp.data > 0.) ] * temp.data * snow_ddf / mm_per_km
snow_melt.data[ np.isnan( snow_melt.data ) ] = 0.
snow_melt.comments.append( "Melt from snow, at " + str( snow_ddf ) + " mm/day/degC" )
snow_melt.data
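A note on the boolean logic above: combining boolean DataFrames elementwise requires pandas' & and | operators (not Python's and/or), and the parentheses around each comparison matter because & binds more tightly than >. A tiny made-up example:
In [ ]:
a = pd.DataFrame( data=[ [ 1., 5. ] ] )
b = pd.DataFrame( data=[ [ 2., 3. ] ] )
( a > b ) & ( b > 0. )    # elementwise result: [[False, True]]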
In [ ]:
ice_ddf = 9. # mm per day per deg C
ice_melt = hypsometry.Hypsometry()
ice_melt.data = modice.data[ (sca.data <= modice.data) & ( temp.data > 0.) ] * temp.data * ice_ddf / mm_per_km
ice_melt.data[ np.isnan( ice_melt.data ) ] = 0.
ice_melt.comments.append( "Melt from ice, at " + str( ice_ddf ) + " mm/day/degC" )
ice_melt.data
In [ ]:
total_snow_melt = snow_melt.data_by_doy()
fig, ax = plt.subplots(1,1)
ax.set_title( "Snow/Ice temperature index model melt from snow, Hunza, 2001" )
ax.set_ylabel('Melt ($km^3$)')
total_snow_melt.plot()
(Remember to overplot the discharge data.)
In [ ]:
total_ice_melt = ice_melt.data_by_doy()
fig, ax = plt.subplots(1,1)
ax.set_title( "Snow/Ice temperature index model melt from ice, Hunza, 2001" )
ax.set_ylabel('Melt ($km^3$)')
total_ice_melt.plot()
In [ ]:
fig, ax = plt.subplots(1,1)
ax.set_title( "Snow/Ice temperature index model melt, Hunza, 2001" )
ax.set_ylabel('Melt ($km^3$)')
total_snow_melt.plot( label="Snow Melt" )
total_ice_melt.plot( label="Ice Melt", style="r" )
ax.legend( loc="best" )
Last, we might want to include discharge data for Danyour Bridge on the same plot:
In [ ]:
import wapda_discharge
reload( wapda_discharge )
discharge_filename = "discharge_by_day/IN_Hunza_at_Danyour.daily_discharge.2001.txt"
discharge = wapda_discharge.Discharge()
discharge.read( discharge_filename, verbose=True )
In [ ]:
discharge.data
In [ ]:
fig, ax = plt.subplots( 1, 1, figsize=(12, 8) )
ax.set_title( "Modelled melt vs. Measured discharge, Hunza at Danyour Bridge, 2001" )
ax.set_ylabel('Melt ($km^3$)')
total_snow_melt.plot( label="Snow Melt" )
total_ice_melt.plot( label="Ice Melt", style="r" )
discharge.data['Discharge'].plot( label="Measured daily discharge", style='ko' )
ax.legend( loc="best" )
Exercise: Make a plot of modelled melt from total (snow + ice) vs. measured discharge.
In [ ]:
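One possible sketch of the exercise, assuming data_by_doy returns pandas Series on a common day-of-year index (as the plots above suggest), so that the two melt series can simply be added elementwise:
In [ ]:
# Sketch: total modelled melt (snow + ice) vs. measured discharge
total_model_melt = total_snow_melt + total_ice_melt
fig, ax = plt.subplots( 1, 1, figsize=(12, 8) )
ax.set_title( "Total modelled melt vs. Measured discharge, Hunza at Danyour Bridge, 2001" )
ax.set_ylabel( 'Melt ($km^3$)' )
total_model_melt.plot( label="Total (Snow + Ice) Melt" )
discharge.data['Discharge'].plot( label="Measured daily discharge", style='ko' )
ax.legend( loc="best" )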