Building Networks for Wetland Connectivity

We’ve been playing with ways to construct networks according to graph theory, ultimately using the R package igraph to investigate wetland connectivity. I can’t reveal too much here because it’s my current research in progress! 🙂 The part of the process I’m currently trying to make more efficient is calculating, for each wetland, the distance to every other wetland within a certain distance threshold (a minimal sketch of the basic idea follows the list below). Ways I can go…

  • Make the distance matrix a smaller object. Instead of storing floating point numbers, I can store integers representing our “distance bins” of interest. This might keep it from crashing, but the downside is that it will probably still take the same amount of time (actually more). The computation could be sped up with more free memory in the workspace, but it still seems a waste to calculate and consider all these distances that we don’t necessarily need.
  • Create new spatial objects that represent minimum polygons around my networks, and do the distance analysis on those. My concern is that the “new shape” will be too imprecise and will skew our results (or I’ll have to use the computational power anyway to confirm a distance calculation or figure out which wetland in the shape is nearest). I’m also wondering whether this will create new borders that are within 5 km of each other.
  • Create an adjacency matrix for the polygons, and then exclude neighbors from consideration as they’re lumped into networks.
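
For illustration, here is a minimal sketch of the basic idea with made-up points (not my actual wetland polygons), pretending the coordinates are in km and using the 5 km threshold mentioned above: connect every pair of wetlands closer than the threshold, then read the “networks” off as connected components.

library(igraph)

# hypothetical wetland centroids, coordinates pretended to be in km
set.seed(1)
coords <- cbind(x = runif(10, 0, 20), y = runif(10, 0, 20))

d <- as.matrix(dist(coords))        # pairwise Euclidean distances
adj <- (d > 0 & d <= 5) * 1         # 1 if within the 5 km threshold

g <- graph_from_adjacency_matrix(adj, mode = "undirected")
components(g)$membership            # which network each wetland falls in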

Some Processing I’ve Done in GDAL

I have GDAL installed in Linux (i.e. the easiest way to install and use it), so the following examples represent command line usage. I have used a smattering of different GDAL utilities, and the links in the descriptions go to the manual page for each utility. I have incorporated these example commands into bash scripts when I need to iterate over multiple files.

This is an example of re-sampling a raster to larger pixel sizes (in this case from the original 0.0002-degree [30 m Landsat] pixels to a coarser 0.09-degree resolution) by taking the mean of the input pixels that fall within each coarser output pixel.

gdalwarp -tr 0.09 0.09 -r average -ot Float32 file.tif fileout.tif
[Figure: the output of the above command, where the 30 m resolution image was scaled up to a 10 km resolution image. The new pixel values are the averages of the binary raster pixels (0, 1) that contributed to them. All maps are plotted in R 3.4.1.]
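
For comparison, roughly the same mean aggregation can be sketched in R with raster::aggregate (an illustration, not what I actually ran; the file names are placeholders and the factor of 450 comes from the 0.0002-to-0.09 degree change above).

library(raster)

r <- raster("file.tif")            # the original fine-resolution binary raster
fact <- round(0.09 / 0.0002)       # ~450 input cells per output cell side
r_coarse <- aggregate(r, fact = fact, fun = mean, na.rm = TRUE)
writeRaster(r_coarse, "fileout_aggregated.tif", overwrite = TRUE)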

I have used GDAL to set no data values (in this case, designating 0 as the no data value).

gdal_translate -of GTiff -a_nodata 0 PPR/2000/Playas_2000.tif region_img/Playas.tif

Here’s an example of stitching two TIFFs together into a larger area, setting the output no data value to wherever the 0s were in the original input files.

gdal_merge.py -o region_img/Canada.tif -a_nodata 0 PPR/2000/Alberta_PPR_2000.tif PPR/2000/SAK_MAN_PPR_2000.tif
[Figure: the merged raster. Notice blue is the only color, representing the only value left in the raster (1).]

If you want to stitch shape files together into a new file, you have to initialize the new shape file first with one of your input files and then add to it.

ogr2ogr regions/Canada.shp shapefiles/Alberta_PPR_2001.shp
ogr2ogr -update -append regions/Canada.shp shapefiles/SAK_MAN_PPR_2001.shp -nln Canada

If you’re going to, say, turn your raster into polygons, you can get rid of clumps below a certain size threshold before doing so (in this case, I’m getting rid of single pixels in my one-value raster, using an 8-neighbor clumping rule).

gdal_sieve.py -st 2 -8 file.tif

Then I can make my (now simplified) polygon layer. In the second line, I project the shapefile to Albers Equal Area (EPSG:5070).

gdal_polygonize.py -8 file.tif -f "ESRI Shapefile" shapefile/file.shp 
ogr2ogr -f "ESRI Shapefile" -progress outfile.shp shapefile/file.shp -t_srs "EPSG:5070"
[Figure: a shape file of wetlands created from the TIFF of the Canadian Prairie Pothole Region!]

Misadventures in Open Source GIS

When most ecologists conceptualize “GIS,” we think of a desktop GIS program, most likely ArcGIS. When we don’t want to use ArcGIS (or can’t), we most likely ask, “is there an R package for that?” In many cases there is, and the repository keeps growing! The ever-growing number of R spatial packages reflects the general trend in GIS right now: it’s booming in lateral growth. Everywhere you turn there’s something new, and my motivation for writing this post is largely just to try to keep up.

Going back “old school,” perhaps the most famous and long-running (i.e. older than I am) open source program is GRASS GIS. Here’s where the open source GIS movement fails, though: the point-and-click friendliness of interfacing with the program stops before you even get to the gate. If I go to the Ubuntu software center, it points me to an outdated version of the program that won’t even launch, so I had to remove it before installing the newer version (where the version number is in the command name). Luckily, though, that process points me to a broader repository where I can access the “Ubuntu GIS” suite.

# Add Ubuntu Unstable PPA when running LTS Ubuntu release
sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable

The real problem I’ve had with GRASS from the beginning, though, is the beginning: you can’t just open the program and try to open some spatial data like you can in ArcGIS. You have to start by selecting a home folder, defining a “location” (which must have a uniform spatial reference), and then making a mapset with spatial reference info. That’s where I admittedly gave up on GRASS long ago, in favor of the ease of just launching ArcGIS and adding a shape file.

QGIS solves more of this problem, but there are again some hangups with the install: it’s not in the Ubuntu software center, and installing it “the old-fashioned way” seems to get an older version of the program (and the errors will tell you so if you launch it from the terminal).

I’m thinking about finally trying to learn PostgreSQL in conjunction with PostGIS. Retracing my thoughts from last year, I think it was because I found out that SQL was a solid way to build my own GIS tools. I remember concluding that PostGIS was probably my best bet for learning open source GIS, and, as it happens, I just saw a post-doc advertisement that specifically requested PostGIS knowledge.

Want to keep up? I compiled a certainly-not-exhaustive-but-pretty-comprehensive list of people I could find tweeting about open source GIS.

Let me know if you should be on that list!

Extracting NetCDF Values to a Shape File

Here’s a script that loops over the climate NetCDF bricks in a folder and, for each file, extracts the values for every layer in the brick, in this case averaged over the polygons in a shape file.

rm(list = ls())
library(raster)
library(rgdal)
library(ncdf4)
library(reshape2)
library(stringr)
library(data.table)
setwd("where your climate files are")
your_shapefile <- readOGR("path to your shapefile", "the layer name for your shapefile")

climate <- rbindlist(lapply(list.files(pattern = "\\.nc$"), function(climate_file)
{
  # in my case, the weather variable is in the filename
  climate_var <- ifelse(grepl("pr", climate_file), "pr",
                        ifelse(grepl("tmax", climate_file), "tmax", "tmin"))
  print(climate_var)
  climate_variable <- brick(climate_file, varname = climate_var, stopIfNotEqualSpaced = FALSE)
  # I needed to shift the x-axis to be on a -180 to 180 scale
  extent(climate_variable) <- c(xmin(climate_variable) - 360, xmax(climate_variable) - 360,
                                ymin(climate_variable), ymax(climate_variable))
  # reproject the polygons to match the climate brick
  polys <- spTransform(your_shapefile, CRS(projection(climate_variable)))
  # extract values starting at layer 1, for 1680 layers
  r.vals <- extract(climate_variable, polys, fun = mean, na.rm = TRUE, df = TRUE, layer = 1, nl = 1680)
  r.vals <- melt(r.vals, id.vars = c("ID"), variable.name = "date")
  r.vals$climate <- climate_var
  # depending on the filename convention of your files, get a variable for your
  # climate model and name it "modelname"
  r.vals$model <- modelname
  # return the data frame so rbindlist() can stack the results from every file
  r.vals
}))

At this point, you have a data frame called “climate” that has all your data long-form. You can cast this as you see appropriate depending on your needs!
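
For example, here is one possible cast (a sketch, assuming you want one column per weather variable; the "value" column name comes from the melt above):

# wide format: one row per polygon ID, date and model; one column per climate variable
climate_wide <- dcast(climate, ID + date + model ~ climate, value.var = "value")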

How This Problematic NetCDF File Was Fixed

I can’t be thankful enough for my godsend Twitter friend/NetCDF guru Michael Sumner for coming in clutch to rectify this problematic file! As I take my baby steps in learning how to deal with NetCDF files (~3 weeks after being thrown into the “deep end” and trying to tread water), I realized that one of our files had some sort of problem. Relevant to this issue, NetCDF files are often (not always) multidimensional arrays, where two of the dimensions are coordinates. The variables can be defined in relation to the dimensions of the file. So, for instance, in my case temperature is described over space and time (a 3-dimensional array), and each temperature value corresponds to a location and a time. In other words, each temperature value “goes along with” a set of coordinates and a time.

When I imported the NetCDF file as a raster brick (which means I put the array data into a stack of grids), the metadata seemed to show a larger longitudinal “step” than latitudinal, the latter of which was at the correct resolution. Note that this means, as is common, I took what is really a point (a temperature value corresponding to a latitude, longitude, and time) and turned it into a grid, where the original point is the center of the resulting cell that gets the value. I tried plotting it to see what it looked like.

[Figure: this first plot hints at the real problem: it appears the image has been grabbed at either end and stretched too wide.]

I tried to crop the image to only the extent I needed, but it was still messed up.

[Figure: the cropped image. I think those are the Great Lakes, erroneously stretched into the study area.]

I took this at face value, assuming that perhaps the grid was somehow “wrong,” but my new friend showed me how to look more deeply into the problem! He opened the file in R and extracted the coordinate values, plotting them for a visual diagnosis of the problem.

library(ncdf4)
con <- ncdf4::nc_open("file.nc")
lon <- ncdf4::ncvar_get(con, "lon")
lat <- ncdf4::ncvar_get(con, "lat")
ncdf4::nc_close(con)

So, my friend suggested plotting the longitude values stored in this dimension of the array.
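
(Presumably something as simple as the following, which makes the jump at the end of the vector obvious.)

plot(lon)        # the last two values drop from ~282 back down to ~3.5
plot(diff(lon))  # the 0.125-degree step is constant everywhere except at the very end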

[Figure: a plot of the longitude values stored in the file.]

Herein we start to see a problem! Something is off about the longitudes stored in the array.

head(lon)
# [1] 244.0625 244.1875 244.3125 244.4375 244.5625 244.6875
tail(lon)
# [1] 281.8125 281.9375 282.0625 282.1875 3.5200 3.5100
r <- raster::brick("file.nc", stopIfNotEqualSpaced = FALSE)
extent(r)
# class : Extent
# xmin : 3.056128
# xmax : 282.6414
# ymin : 40
# ymax : 52.875

The first longitude values are the correct lowest values for the raster. For whatever reason, the last two longitude values got reassigned to the (very wrong) smallest values in the grid, though they should be the largest values. The extent describes the grid that I assigned the points to when I imported the data into R. For comparison and clarification, I also posted the extent of the raster that I created, to show that the stored coordinates and the raster extent are two different things. Assuming that the NetCDF stores the center point of each cell, the latitude extent (which is correct) provides a half-cell-width buffer around the actual data points, correctly placing them in the centers of their cells.
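
As a quick sanity check on that half-cell reasoning (my own aside, not part of his diagnosis): with a 0.125-degree cell size, padding the range of the stored latitudes by half a cell should reproduce the ymin and ymax reported above.

# range(lat) padded by half a cell should give ymin = 40, ymax = 52.875
range(lat) + c(-1, 1) * 0.125/2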
So, to correct it, he reset the raster’s extent by hand, with the following note in his code…
## let's treat it as if the start is correct,
## the final two columns are incorrect
## we hard-code the half-cell left and right cell edge
## (but note the "centre" might not
## be the right interpretation from this model, if we're being exacting)
valid_lon2 <- c(lon[1] - 0.125/2) + c(0, length(lon)) * 0.125
valid_lat <- range(lat) + c(-1, 1) * 0.125/2
r2 <- setExtent(r, extent(valid_lon2, valid_lat))
r_final <- raster::shift(r2, x = -360)

Now that the extent is rectified, he shifted it to our desired reference, namely from 0-360 longitude to -180/180.
Now the Great Lakes are…great!
Voila! Thank you very much, Michael!

Next NetCDF Issue I Discovered Today

Another problem I ran into: apparently the Nov-Dec 1999 monthly averages for HADCM3 B1 precipitation, in the file where I flipped the axes with NCO, are blank.

problem <- brick("hadcm3.b1.pr.NAm.grid_monthly.nc_out.nc")
plot(problem$X1999.11.15)
plot(problem$X1999.12.16)

The above returns the correct axes but blank plots. The problem is present even before I flipped the axes, so I have to go back to the original files, from before I did the averaging, to find out what’s wrong. I wonder if there were blank layers anywhere in the Nov-Dec daily data?
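
One quick way to confirm those layers are genuinely empty (rather than a plotting quirk) is to check whether every cell is NA; the same sketch could then be run on the daily files from before the averaging.

# TRUE means the layer really is all NA
all(is.na(values(problem$X1999.11.15)))
all(is.na(values(problem$X1999.12.16)))

# or count the non-NA cells in every layer of the brick at once
colSums(!is.na(values(problem)))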

How I’ve Been Processing Climate Data

From a THREDDS server, I’m using the NetcdfSubset portal to obtain a spatial subset of my climate data set of interest. Since the files I need are too big to download from the HTTP server, the portal instead gives me an OPeNDAP URI with the spatial subset info, which I then pass to nccopy from the NetCDF library.

Then, I installed CDO and wrote a script to compute the monthly means for all the files: it averages each daily time-step file into a monthly-mean (or whatever statistic you choose) file.

cdo monmean foo_hourly_or_daily.nc  foo_monthly_mean.nc

Then, I installed NCO to flip the axes (i.e. reorder the dimensions so latitude comes before longitude).

ncpdq -a lat,lon in.nc out.nc

Then, I imported the correctly-oriented files into R.

rm(list = ls())
library(raster)
library(rgdal)
library(ncdf4)
setwd("wherever your files are")
climate <- brick("filename.nc",varname="whatever your climate variable is")

In my case, as I think is common for climate files, longitude was on a 0-360 scale, instead of -180/180.
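
One way to get the longitudes onto -180/180 is to shift the extent by 360 degrees, the same trick used in the extraction script earlier in this post (a sketch, assuming the grid doesn’t cross the antimeridian and the brick is named climate as above):

# subtract 360 from the x limits to move the 0-360 grid onto -180/180
extent(climate) <- c(xmin(climate) - 360, xmax(climate) - 360,
                     ymin(climate), ymax(climate))
# raster::shift(climate, x = -360) accomplishes the same thing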

Spatially Manipulating NetCDF Files

I ended up getting a bunch of climate NetCDF files from a colleague for each combination of climate model, climate change scenario and variable. So, what I have is a list of 3-D files consisting of observations/predictions of a given weather variable over latitude, longitude and time (you can picture them as cubes, if you like). I will need to spatially adjust the files, and then subset the data.

ncpdq -a lat,lon in.nc out.nc

I need to…

  • keep only the extent within a shape file we’re using
  • figure out how to summarize it by the polygons in the shape file (both options are sketched in the code after this list)
    • average?
    • keep only the center point?
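
Here is a rough sketch of those two summarizing options using the raster and rgdal packages (the file, variable, and layer names are placeholders):

library(raster)
library(rgdal)

clim  <- brick("afile.nc", varname = "pr")                  # placeholder file and variable
polys <- readOGR("path to your shapefile", "layer name")    # placeholder shape file
polys <- spTransform(polys, CRS(projection(clim)))          # match the NetCDF's coordinate system

# option 1: average every cell falling in each polygon (one row per polygon)
vals_mean <- extract(clim, polys, fun = mean, na.rm = TRUE, df = TRUE)

# option 2: keep only the value at each polygon's center (label) point
centers  <- coordinates(polys)
vals_ctr <- extract(clim, centers, df = TRUE)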

Some of my closest colleagues, Brooke Bateman and Andy Allstadt, had some good advice for how to work with the netCDF files in R once I get them:

  • ncdf4 package
  • raster package: can open netCDF either as…
    • single layer (with the raster function)
    • the entire thing (brick function)
  • SDMTools
    • sample x,y points from raster using extract data function
    • convert the raster to ASCII

You can load those libraries, and then do a few things to take a look.

library(raster)
print(raster("afile.nc"))  ## same as 'ncdump -h afile.nc'
b <- brick("afile.nc", varname = "pr")  # this is the variable name internal to the file