Chris Holden (ceholden@gmail.com) - https://github.com/ceholden
The OGR library is a companion library to GDAL that handles vector data capabilities, including information queries, file conversions, rasterization of polygon features, polygonization of raster features, and much more. It handles popular formats including the ESRI Shapefile, Keyhole Markup Language, PostGIS, and SpatiaLite. For more information on how OGR came about and how it relates to GDAL, see here: http://trac.osgeo.org/gdal/wiki/FAQGeneral#WhatisthisOGRstuff.
The authors of GDAL/OGR provide a tutorial for the OGR library here.
Note: As of 08/12/2014, the API used in this tutorial seems to be ahead of the current 1.11.0 release. Specifically, they demonstrate how to open a vector file using gdal.OpenEx, which is a change designed to unify the GDAL and OGR sections of the library. A clone of the GDAL 1.9 API tutorial can be found here.
In this chapter we will use an ESRI Shapefile that contains training data I collected in QGIS for the example image we've been working on.
Just like GDAL, OGR abstracts the file formats so that we can use the same code for any format. It employs the same concept of a dataset object which we can gather information from:
In [1]:
# Import Python 3 print function
from __future__ import print_function
# Import OGR
from osgeo import ogr
# Open the dataset from the file
dataset = ogr.Open('../example/training_data.shp')
# Make sure the dataset exists -- it would be None if we couldn't open it
if not dataset:
    print('Error: could not open dataset')
With our Shapefile read in, we can look at some of its properties:
In [2]:
### Let's get the driver from this file
driver = dataset.GetDriver()
print('Dataset driver is: {n}\n'.format(n=driver.name))
### How many layers are contained in this Shapefile?
layer_count = dataset.GetLayerCount()
print('The shapefile has {n} layer(s)\n'.format(n=layer_count))
### What is the name of the first layer?
layer = dataset.GetLayerByIndex(0)
print('The layer is named: {n}\n'.format(n=layer.GetName()))
### What is the layer's geometry? is it a point? a polyline? a polygon?
# First read in the geometry - but this is the enumerated type's value
geometry = layer.GetGeomType()
# So we need to translate it to the name of the enum
geometry_name = ogr.GeometryTypeToName(geometry)
print("The layer's geometry is: {geom}\n".format(geom=geometry_name))
### What is the layer's projection?
# Get the spatial reference
spatial_ref = layer.GetSpatialRef()
# Export this spatial reference to something we can read... like the Proj4
proj4 = spatial_ref.ExportToProj4()
print('Layer projection is: {proj4}\n'.format(proj4=proj4))
### How many features are in the layer?
feature_count = layer.GetFeatureCount()
print('Layer has {n} features\n'.format(n=feature_count))
### How many fields are in the shapefile, and what are their names?
# First we need to capture the layer definition
defn = layer.GetLayerDefn()
# How many fields
field_count = defn.GetFieldCount()
print('Layer has {n} fields'.format(n=field_count))
# What are their names?
print('Their names are: ')
for i in range(field_count):
    field_defn = defn.GetFieldDefn(i)
    print('\t{name} - {datatype}'.format(name=field_defn.GetName(),
                                         datatype=field_defn.GetTypeName()))
The shapefile is already projected in the same projection that our example raster image is in, so we won't be needing to reproject it. You could, however, do so using either the ogr2ogr command line application, or by reprojecting the shapefile in Python.
The training data we just opened contains two fields. Combined with the location information inherent to polygons in a Shapefile, these fields (i.e., the integer ID and the string description) are all we need for pairing labels with the information in our raster.
However, in order to pair up our vector data with our raster pixels, we will need a way of co-aligning the datasets in space.
One (complicated) way of doing this would be to manually loop through each polygon in our vector layer and determine which raster pixels it contains. This approach is essentially what GIS software (e.g., ENVI, ArcGIS, QGIS) does when pairing rasters with vectors, as in zonal statistics.
Another, less complicated way is to use the concept of a Region of Interest (ROI) image, in which each non-zero pixel value corresponds to a raster representation of a polygon from our vector layer. In the rasterized representation of our training data, most of the values would be 0 because our training samples are small compared to the entire study area. Because we have assigned an integer ID field to each polygon, we can use those integers to record which polygon each labeled pixel belongs to. In this case, I have assigned values ranging from 1 to 5 to the classes.
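To make the ROI idea concrete before we rasterize anything, here is a toy NumPy sketch (invented values, not our actual training data): a small array that is mostly 0, with two patches of non-zero class labels standing in for rasterized polygons.

```python
import numpy as np

# Toy 8 x 8 "ROI" image: 0 = unlabeled, non-zero = polygon class labels
roi = np.zeros((8, 8), dtype=np.uint8)
roi[1:3, 1:4] = 3  # a small rasterized polygon labeled with class 3
roi[5:7, 5:8] = 1  # another polygon labeled with class 1

# Most pixels are 0; only the training polygons carry labels
for c in np.unique(roi):
    print('Class {c}: {n} pixels'.format(c=c, n=int((roi == c).sum())))
# Prints: Class 0: 52 pixels, Class 1: 6 pixels, Class 3: 6 pixels
```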
To accomplish this rasterization of a vector layer, we could use the GDAL command line utility gdal_rasterize, but we can stick to pure Python by using the GDAL function gdal.RasterizeLayer.
I can't seem to get the pure Python code to work 100% of the time across Python installations (e.g., it works on my desktop and notebook, but not on Wakari), even though every code example on the web leads me to believe I'm doing it correctly. Since I can't figure out why this is occurring - Python simply crashes - I will first provide the gdal_rasterize command line that we can use to get the same results. In the following example, I write "%%bash" on the first line to indicate that I am using the Bash shell (you could use any shell you prefer). If you really need to execute everything from Python, you could run the same gdal_rasterize command by invoking the shell via the subprocess module.
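For completeness, here is a sketch of that subprocess route, mirroring the gdal_rasterize call shown further below. It assumes gdal_rasterize is on your PATH, so the actual invocation is left commented out.

```python
import subprocess

# Build the gdal_rasterize call as an argument list -- no shell quoting needed
cmd = ['gdal_rasterize',
       '-a', 'id',
       '-l', 'training_data',
       '-of', 'GTiff',
       '-a_srs', '+proj=utm +zone=15 +datum=WGS84 +units=m +no_defs',
       '-a_nodata', '0',
       '-te', '462405', '1734315', '469905', '1741815',
       '-tr', '30', '30',
       '-ot', 'Byte',
       '../example/training_data.shp', '../example/training_data.gtif']

# check_call raises CalledProcessError if gdal_rasterize exits non-zero
# subprocess.check_call(cmd)
```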
Note:
If you're running this notebook using Wakari, then the GDAL utilities won't be in your PATH. I have one block of code which appends the location where the GDAL utilities are installed to your PATH. The code is
# First I need to check if GDAL command line utilities are available
command -v "gdalinfo" > /dev/null || {
    echo "I require GDAL command line utilities. Adding Wakari location to PATH just in case we're on Wakari:"
    export PATH=/opt/anaconda/envs/np18py27-1.9/bin/:$PATH
}
The first thing we need to do is figure out the spatial extent and pixel size of our output raster. To do this, I will use gdalinfo:
In [3]:
%%bash
# Remember -- "%%bash" as seen above just indicates to the IPython notebook that I'm now writing in Bash
# First I need to check if GDAL command line utilities are available
command -v "gdalinfo" > /dev/null || {
    echo "I require GDAL command line utilities. Adding Wakari location to PATH just in case we're on Wakari:"
    export PATH=/opt/anaconda/envs/np18py27-1.9/bin/:$PATH
}
# Print out metadata about raster -- we include "-proj4" to print out the Proj4 projection string
gdalinfo -proj4 ../example/LE70220491999322EDC01_stack.gtif
Wow - how informative!
We will use the information about the Upper Left and Lower Right coordinates:
Upper Left ( 462405.000, 1741815.000) ( 93d21' 3.44"W, 15d45'16.33"N)
Lower Right ( 469905.000, 1734315.000) ( 93d16'51.06"W, 15d41'12.60"N)
This tells us that our Upper Left X/Y and Lower Right X/Y are "462405, 1741815, 469905, 1734315". We can also see that our Landsat pixels are 30x30m.
The projection is UTM15N, and the projection string is '+proj=utm +zone=15 +datum=WGS84 +units=m +no_defs '
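These corner coordinates can also be computed in Python rather than read off the gdalinfo printout. A sketch using the six geotransform coefficients (the 250 x 250 image size follows from the 7500 m extent at 30 m pixels):

```python
# GDAL's geotransform maps pixel/line (col, row) to projected X/Y:
#   X = gt[0] + col * gt[1] + row * gt[2]
#   Y = gt[3] + col * gt[4] + row * gt[5]
# Coefficients below match the gdalinfo output above (30 m pixels, north-up)
gt = (462405.0, 30.0, 0.0, 1741815.0, 0.0, -30.0)
ncol, nrow = 250, 250

ulx, uly = gt[0], gt[3]                    # Upper Left corner
lrx = gt[0] + ncol * gt[1] + nrow * gt[2]  # Lower Right X
lry = gt[3] + ncol * gt[4] + nrow * gt[5]  # Lower Right Y

print('Extent (xmin ymin xmax ymax): {0} {1} {2} {3}'.format(ulx, lry, lrx, uly))
# Prints: Extent (xmin ymin xmax ymax): 462405.0 1734315.0 469905.0 1741815.0
```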
We will need this information for gdal_rasterize. We can print the usage of gdal_rasterize as follows:
In [4]:
%%bash
# First I need to check if GDAL command line utilities are available
command -v "gdalinfo" > /dev/null || {
    echo "I require GDAL command line utilities. Adding Wakari location to PATH just in case we're on Wakari:"
    export PATH=/opt/anaconda/envs/np18py27-1.9/bin/:$PATH
}
# Print out the usage
gdal_rasterize --help
For better descriptions, see the documentation page here.
Now let's run the command -- note that we need to rearrange the Upper Left and Lower Right X/Y pairs into "xmin ymin xmax ymax" order:
In [5]:
%%bash
# First I need to check if GDAL command line utilities are available
command -v "gdalinfo" > /dev/null || {
    echo "I require GDAL command line utilities. Adding Wakari location to PATH just in case we're on Wakari:"
    export PATH=/opt/anaconda/envs/np18py27-1.9/bin/:$PATH
}
# Explanation of switches:
# -a ==> write values from the "id" attribute of the shapefile
# -layer ==> the layer name of our shapefile
# -of ==> Output raster file format
# -a_srs ==> output spatial reference system string
# -a_nodata ==> NODATA value for output raster
# -te ==> target extent which matches the raster we want to create the ROI image for
# -tr ==> target resolution, 30 x 30m
# -ot Byte ==> Since we only have values 0 - 5, a Byte datatype is enough
gdal_rasterize -a "id" \
    -l training_data \
    -of "GTiff" \
    -a_srs "+proj=utm +zone=15 +datum=WGS84 +units=m +no_defs" \
    -a_nodata 0 \
    -te 462405 1734315 469905 1741815 \
    -tr 30 30 \
    -ot Byte \
    ../example/training_data.shp ../example/training_data.gtif
In a lot of ways the command line version is easier than programming it using the Python bindings to GDAL's API. Continue on for an example using this second method:
In [8]:
# Import GDAL
from osgeo import gdal
# First we will open our raster image, to understand how we will want to rasterize our vector
raster_ds = gdal.Open('../example/LE70220491999322EDC01_stack.gtif', gdal.GA_ReadOnly)
# Fetch number of rows and columns
ncol = raster_ds.RasterXSize
nrow = raster_ds.RasterYSize
# Fetch projection and extent
proj = raster_ds.GetProjectionRef()
ext = raster_ds.GetGeoTransform()
raster_ds = None
# Create the raster dataset
memory_driver = gdal.GetDriverByName('GTiff')
out_raster_ds = memory_driver.Create('../example/training_data.gtif', ncol, nrow, 1, gdal.GDT_Byte)
# Set the ROI image's projection and extent to our input raster's projection and extent
out_raster_ds.SetProjection(proj)
out_raster_ds.SetGeoTransform(ext)
# Fill our output band with the 0 blank, no class label, value
b = out_raster_ds.GetRasterBand(1)
b.Fill(0)
# Rasterize the shapefile layer to our new dataset
status = gdal.RasterizeLayer(out_raster_ds,  # output to our new dataset
                             [1],  # output to our new dataset's first band
                             layer,  # rasterize this layer
                             None, None,  # don't worry about transformations since we're in same projection
                             [0],  # burn value 0
                             ['ALL_TOUCHED=TRUE',  # rasterize all pixels touched by polygons
                              'ATTRIBUTE=id']  # put raster values according to the 'id' field values
                             )
# Close dataset
out_raster_ds = None
if status != 0:
    print("I don't think it worked...")
else:
    print("Success")
Now that we have a working method, we can check how many pixels of training data we collected:
In [34]:
# Import NumPy for some statistics
import numpy as np
roi_ds = gdal.Open('../example/training_data.gtif', gdal.GA_ReadOnly)
roi = roi_ds.GetRasterBand(1).ReadAsArray()
# How many pixels are in each class?
classes = np.unique(roi)
for c in classes:
    print('Class {c} contains {n} pixels'.format(c=c,
                                                 n=(roi == c).sum()))
Now that we have our ROI image, we can proceed to use it for pairing our labeled polygons with the matching pixels in our Landsat image to train a classifier for image classification. We continue this step in the next chapter.
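As a preview of that pairing, the ROI array can serve as a boolean mask into the image array. A toy sketch with random values standing in for the Landsat bands (array names and shapes here are illustrative, not from the tutorial's data):

```python
import numpy as np

# Toy stand-ins: a 3-band "image" and an ROI with two labeled polygons
np.random.seed(0)
img = np.random.randint(0, 255, size=(100, 100, 3)).astype(np.uint8)
roi = np.zeros((100, 100), dtype=np.uint8)
roi[10:20, 10:20] = 1  # 100 pixels of class 1
roi[50:60, 50:70] = 4  # 200 pixels of class 4

# Keep only labeled pixels: X holds band values, y holds class labels
mask = roi > 0
X = img[mask, :]  # shape: (n_labeled_pixels, n_bands)
y = roi[mask]     # shape: (n_labeled_pixels,)

print('X shape: {0}, y shape: {1}'.format(X.shape, y.shape))
# Prints: X shape: (300, 3), y shape: (300,)
```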