In [ ]:
import warnings; warnings.simplefilter('ignore')
SLIP is used to automate the detection of landslides. A SLIP product is the result of filtering based on per-pixel changes in both soil moisture and vegetation in areas with high elevation gradients. All of these quantities (with the exception of elevation gradients) can be computed using simple band-math equations.
SLIP makes use of the following Landsat 7 Surface Reflectance bands: RED, NIR, SWIR1, and the pixel_qa quality band.
SLIP makes use of the following ASTER GDEM V2 band: dem (digital elevation in meters).
Algorithmically speaking, SLIP is a series of per-pixel filter operations acting on relationships between NEW (current) and BASELINE (historical) values of an area. The pixels remaining after all filter operations are what SLIP classifies as landslides. The operations taken to create a SLIP product are itemized below:

- Build a BASELINE composite (a rolling average of recent acquisitions) for each acquisition.
- Filter for significant change in NDWI (greater than 20%) between NEW and BASELINE values.
- Filter for significant increase in RED reflectance (greater than 40%) between NEW and BASELINE values.
- Filter out pixels whose angle of elevation is below 15 degrees, using ASTER GDEM elevation data.
The following code connects to the datacube and accepts 'SLIP' as an app-name.
In [ ]:
import datacube
dc = datacube.Datacube(app = 'SLIP')
In [ ]:
from utils.data_cube_utilities.dc_display_map import display_map
# Display the analysis region; lat and lon are defined in the next cell.
display_map(lat, lon)
In [ ]:
import datetime
# Freetown, Sierra Leone
# (https://www.reuters.com/article/us-leone-mudslide-africa/cities-across-africa-face-threat-of-landslides-like-sierra-leone-idUSKCN1AY115)
# define geographic boundaries in (min, max) format
lon = (-13.3196,-12.9366)
lat = (8.1121,8.5194)
# define date range boundaries in (min,max) format
# A landslide occurred near Freetown in August 2017.
date_range =("2017-06-01", "2017-10-31")
# define product and platform to specify our intent to load Landsat 7 sr products
platform = 'LANDSAT_7'
product = 'ls7_usgs_sr_scene'
# define desired bands. For SLIP, only red, nir, swir1, and pixel_qa are necessary.
desired_bands = ['red','nir','swir1','pixel_qa']
#add blue and green bands since they're needed for visualizing results(RGB)
desired_bands = desired_bands + ['green', 'blue']
# load the area. This should yield a handful of acquisitions between June and October 2017.
landsat = dc.load(product=product,
                  platform=platform,
                  lat=lat,
                  lon=lon,
                  time=date_range,
                  measurements=desired_bands)
In [ ]:
from utils.data_cube_utilities.dc_mosaic import ls7_unpack_qa
from utils.data_cube_utilities.dc_displayutil import display_at_time
import numpy as np
import xarray as xr
import pandas as pd
import matplotlib.pyplot as plt
# Unpack the pixel_qa band into per-pixel "clear land" and "water" flags
clear_xarray = ls7_unpack_qa(landsat.pixel_qa, "clear")
water_xarray = ls7_unpack_qa(landsat.pixel_qa, "water")
# A pixel is usable if it is a clear view of either land or water
clean_mask = np.logical_or(clear_xarray.values.astype(bool),
                           water_xarray.values.astype(bool))
In [ ]:
# Filter in favor of clear pixels.
landsat = landsat.where(clean_mask)
#Filter out negative values. (This includes noise and no-data values of -9999)
landsat = landsat.where(landsat >= 0)
The code above applies True or False values globally across all bands to any (time, lat, lon) coordinates that do not have a clear view of land or water. These values are then used to filter out coordinates that cannot be used for analysis.
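As a minimal sketch of this masking behavior, the toy example below applies a boolean mask to a tiny synthetic Dataset in place of the real Landsat load. The band values and mask are made up for illustration; the two `where` calls mirror the filtering cells above.

```python
import numpy as np
import xarray as xr

# A tiny stand-in for the Landsat Dataset; -9999.0 is a no-data value
ds = xr.Dataset(
    {
        "red": (("y", "x"), np.array([[100.0, 200.0], [300.0, -9999.0]])),
        "nir": (("y", "x"), np.array([[400.0, 500.0], [600.0, -9999.0]])),
    }
)
# True where the pixel is a clear view of land or water, False otherwise
clean_mask = xr.DataArray(np.array([[True, False], [True, True]]),
                          dims=("y", "x"))

masked = ds.where(clean_mask)       # masked-out pixels become NaN in every band
masked = masked.where(masked >= 0)  # no-data (-9999) values become NaN as well
```

After both calls, only pixel (0, 0) and pixel (1, 0) retain values; the cloudy pixel and the no-data pixel are NaN across all bands.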
In [ ]:
acq_to_show = landsat.sel(time=pd.to_datetime("2017-08-31", format="%Y-%m-%d"),
                          method='nearest')
rgb_da = acq_to_show[['red', 'green', 'blue']].to_array()
# Clip the color scale to the 5th-95th percentiles for a clearer image
vmin = rgb_da.quantile(0.05).values
vmax = rgb_da.quantile(0.95).values
rgb_da.plot.imshow(vmin=vmin, vmax=vmax)
Notice the lines stretching across the figure above. These gaps are the result of the scan line corrector (SLC) failure on Landsat 7.
In [ ]:
new = landsat
The NEW values are simply the current acquisitions. OLD values, however, can have varying interpretations. In SLIP, OLD values (referred to in code as BASELINE values) are rolling averages of the non-NaN values leading up to the date in question.
The following figure illustrates such a compositing method:
In the figure above, t4 values are the average of t1-t3 (assuming a window size of 3)
The code below composites with a window size of 5.
In [ ]:
landsat
In [ ]:
from utils.data_cube_utilities.dc_baseline import generate_baseline
#Generates a moving average of n values leading up to current time
baseline = generate_baseline(landsat, composite_size = 5, mode = 'average')
It is important to note that compositing shortens the length of baseline's time domain by the window size, since ranges shorter than the composite size are not computed. For a composite size of 5, the first 5 time values will not have composite values.
In [ ]:
(len(new.time), len(baseline.time))
In [ ]:
display_at_time([baseline, new], time = datetime.datetime(2017,8,31), width = 2, w = 12)
The baseline composite is featured in the figure above (left). It represents what was typical for the five acquisitions leading up to 2017-08-31. Displayed next to it (right) is the true-color visualization of the acquisition at 2017-08-31. The new object contains unaltered Landsat 7 scenes that are indexable using a date like (2017, 8, 31). The baseline object contains a block of composites of those Landsat scenes that is indexable in the same way.
SLIP makes the major assumption that a landslide will strip a hill- or mountain-side of all of its vegetation.
SLIP uses NDWI, an index used to monitor the water content of leaves, to track the existence of vegetation on a slope. At high enough levels, a change in leaf water content can no longer be attributed to something like seasonal fluctuation and most likely indicates a change in the existence of vegetation.
NDWI is computed on a per-pixel level and involves arithmetic between NIR (near infrared) and SWIR1 (short-wave infrared) values. NDWI is computed for both NEW and BASELINE imagery, then compared to yield NDWI change. The equations below detail a simple derivation of change in NDWI:
$$ NDWI_{NEW} = \frac{NIR_{NEW} - SWIR_{NEW}}{NIR_{NEW} + SWIR_{NEW}}$$
$$ NDWI_{BASELINE} = \frac{NIR_{BASELINE} - SWIR_{BASELINE}}{NIR_{BASELINE} + SWIR_{BASELINE}}$$
$$\Delta NDWI = NDWI_{NEW} - NDWI_{BASELINE}$$
The code is just as simple:
In [ ]:
ndwi_new = (new.nir- new.swir1)/(new.nir + new.swir1)
ndwi_baseline = (baseline.nir - baseline.swir1)/ (baseline.nir + baseline.swir1)
ndwi_change = ndwi_new - ndwi_baseline
In the context of code, you can best think of filtering as a piecewise transformation that assigns a NaN (or null) value to points that fall below our minimum change threshold (for SLIP, that threshold is 20%).
In code, it's even simpler:
In [ ]:
new_ndwi_filtered = new.where(abs(ndwi_change) > 0.2)
In [ ]:
display_at_time([new, (new, new_ndwi_filtered),new_ndwi_filtered], time = datetime.datetime(2017,8,31), width = 3, w =14)
Highlighted in the center picture are values that meet our NDWI change expectations. Featured in the right-most image is what remains of our original image after NDWI filtering.
SLIP makes another important assumption about landslides: on top of stripping the slope of vegetation, a landslide will reveal a large layer of previously vegetated soil. Since soil reflects more light in the RED spectral band than highly vegetated areas do, SLIP looks for increases in RED band values. This captures both the loss of vegetation and the unearthing of soil.
Red change is computed on a per-pixel level and involves arithmetic on the RED band values. The derivation of RED change is simple:
$$\Delta red = \frac{red_{NEW} - red_{BASELINE}}{red_{BASELINE}}$$
The code is just as simple:
In [ ]:
red_change = (new.red - baseline.red)/(baseline.red)
Filtering RED reflectance change is just like the piecewise transformation used for filtering NDWI change.
$$ red\_filter(Dataset) = \left\{
\begin{array}{lr}
Dataset & : \Delta red(Dataset) > 0.4\\
np.nan & : \Delta red(Dataset) \le 0.4
\end{array}
\right.\\ $$
In Code:
In [ ]:
new_red_and_ndwi_filtered = new_ndwi_filtered.where(red_change > 0.4)
In [ ]:
display_at_time([new, (new, new_red_and_ndwi_filtered), new_red_and_ndwi_filtered], time = datetime.datetime(2017,8,31), width = 3, w = 14)
ASTER GDEM models provide elevation data for each pixel, expressed in meters. For SLIP, height alone is not enough to determine that a landslide can happen on a pixel. SLIP focuses on areas with a high elevation gradient/slope (expressed in non-radian degrees). The driving motivation for slope-based filtering is that landslides are less likely to happen in flat regions.
In [ ]:
aster = dc.load(product="terra_aster_gdm",
                lat=lat,
                lon=lon,
                measurements=['dem'])
A gradient is generated for each pixel using the four pixels adjacent to it, along with a rise/run formula.
$$ Gradient = \frac{Rise}{Run} $$
Basic trigonometric identities can then be used to derive the angle of elevation:
$$ \theta = \tan^{-1}\left(\frac{Rise}{Run}\right) = \tan^{-1}(Gradient) $$
When deriving the angle of elevation for a pixel, two gradients are available: one formed by the bottom and top pixels, the other formed by the right and left pixels. For the purposes of identifying landslide-causing slopes, the greater of the two gradients is used.
The following image describes the process for angle-of-elevation calculation for a single pixel within a grid of DEM pixels
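The per-pixel angle-of-elevation computation described above can be sketched with numpy on a synthetic DEM, as below. The elevations are invented for illustration, and np.gradient stands in for the adjacent-pixel differencing; the 30 m spacing matches the resolution passed to the slope-mask utility later.

```python
import numpy as np

# A tiny synthetic DEM (meters): a single raised pixel in flat terrain
dem = np.array([[10.0, 10.0, 10.0],
                [10.0, 40.0, 10.0],
                [10.0, 10.0, 10.0]])
resolution = 30.0  # meters per pixel

# Differences with the top/bottom and left/right neighbours (rise/run)
dz_dy, dz_dx = np.gradient(dem, resolution)
# Keep the greater of the two gradients for each pixel
gradient = np.maximum(np.abs(dz_dx), np.abs(dz_dy))
# Convert rise/run to an angle of elevation in (non-radian) degrees
angle = np.degrees(np.arctan(gradient))

slope_mask = angle > 15  # True where the slope is steep enough for SLIP
```

The pixel to the left of the peak sees a rise of 30 m over a 30 m run, i.e. a gradient of 1 and an angle of 45 degrees, so it passes the 15-degree threshold, while the surrounding flat pixels do not.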
The vagaries of implementation have been abstracted away by dc_slip. It is used to derive a slope mask: an array of True and False values based on whether or not each pixel meets a minimum angle-of-elevation requirement. Its use is detailed below.
In [ ]:
from utils.data_cube_utilities.dc_slip import create_slope_mask
# Create a slope mask. False: pixel slope < 15 degrees; True: pixel slope > 15 degrees
is_above_slope_threshold = create_slope_mask(aster, degree_threshold = 15, resolution = 30)
Filtering based on slope is a piecewise transformation using the derived slope mask:
Its use in code:
In [ ]:
slip_product = new_red_and_ndwi_filtered.where(is_above_slope_threshold)
The final result of SLIP is a small set of regions with a high likelihood of landslides having occurred in them. Furthermore, no detections can be made in flat areas (areas with less than a $15^{\circ}$ angle of elevation).
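One convenient way to summarize a SLIP product is to count the candidate landslide pixels per acquisition, since every surviving (non-NaN) pixel passed all three filters. The sketch below synthesizes a small SLIP-like array rather than reusing slip_product, which depends on the datacube load above.

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for one band of a SLIP product: NaN where a pixel
# was filtered out, a value where it survived all filters
slip_like = xr.DataArray(
    np.array([[[np.nan, 1.0], [np.nan, np.nan]],
              [[2.0, 3.0], [np.nan, 4.0]]]),
    dims=("time", "y", "x"),
)

# Count the non-NaN (candidate landslide) pixels in each acquisition
detections_per_time = slip_like.count(dim=("y", "x"))
print(detections_per_time.values)  # → [1 3]
```

The same call applied to one band of the real slip_product would show which acquisitions contain likely landslide activity.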
In [ ]:
display_at_time([new, (new, slip_product), slip_product], time = datetime.datetime(2017,8,31), width = 3, w = 14)
In [ ]:
display_at_time([new, (new, new_ndwi_filtered), new_ndwi_filtered, new, (new, new_red_and_ndwi_filtered), new_red_and_ndwi_filtered, new, (new, slip_product), slip_product], time = datetime.datetime(2017,8,31), width = 3, w = 14, h = 12)
In [ ]:
display_at_time([baseline, (new, slip_product)], time = datetime.datetime(2017,8,31), width = 2, mode = 'blend', color = [210,7,7], w = 14)
Based on the results displayed above, it appears that SLIP has successfully classified major changes in the imagery and, possibly, even detected a landslide.