Building Networks for Wetland Connectivity

We’ve been playing with ways to construct networks according to graph theory, ultimately using the R package igraph to investigate wetland connectivity. I can’t reveal too much here because it’s my current research in progress! 🙂 The part of the process I’m currently trying to make more efficient is calculating, for each wetland, the distance to every other wetland within a certain threshold. Ways I can go (a rough code sketch of the basic threshold-linking step follows this list)…

  • Make the distance matrix a smaller object. Instead of storing floating-point numbers, I can store integers representing our “distance bins” of interest. This might keep it from crashing, but it will probably still take the same (actually more) time. The computation could be sped up with more memory free in the workspace, but it still seems a waste to calculate and consider all these distances that we don’t necessarily need.
  • Create new spatial objects that represent minimum polygons around my networks, and do the distance analysis on those. My concern is that the “new shape” will be too imprecise and will skew our results (or I’ll have to use the computational power anyway to confirm a distance calculation/figure out what the nearest wetland in the shape is). I’m also wondering if this will create new borders that are within 5 km.
  • Create an adjacency matrix for the polygons, and then exclude neighbors from consideration as they’re lumped into networks.
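
For context, here’s a minimal sketch in R of the threshold-linking step itself (link wetlands within a distance cutoff, then take the connected components as the networks). This is just one sparse alternative, not necessarily what I’ll settle on: the wetlands object and the 5 km cutoff are placeholders, and it assumes an sf polygon layer in a projected, meter-based CRS. sf’s sparse neighbor query avoids ever materializing the full floating-point distance matrix.

library(sf)
library(igraph)

threshold <- 5000  # placeholder 5 km cutoff, in meters

# Sparse neighbor query: only pairs within the threshold come back,
# so the full n-by-n distance matrix is never built
nbrs <- st_is_within_distance(wetlands, dist = threshold)

# Convert the sparse list to an edge list, keeping each pair once
edges <- do.call(rbind, lapply(seq_along(nbrs), function(i) {
  js <- nbrs[[i]][nbrs[[i]] > i]  # drop self-matches and duplicate pairs
  if (length(js)) cbind(i, js) else NULL
}))

# Build the graph over all wetlands; isolated wetlands become singleton networks
g <- make_empty_graph(n = length(nbrs), directed = FALSE)
if (!is.null(edges)) g <- add_edges(g, as.vector(t(edges)))

# Connected components are the wetland "networks"
networks <- components(g)$membership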

Autonomous Recording Units for Birds

I finally wrote a post that fits all of my blog categories! 🙂 Years ago, Dr. T. Mitchell Aide visited my former lab and I had an opportunity to meet with him. Hearing about his work with automated classification of bird calls first got my mind churning about how we can use ARUs to gather field data (Aide et al. 2013). His work focuses on the tropics, and thus on the complexity of the animal soundscape (Acevedo et al. 2009). I became further interested when I came across the technique being used to monitor my nemesis bird, the yellow rail! Also, ARUs have been tested against the Breeding Bird Survey (BBS), which has been the focus of most of my research to date (Rempel et al. 2013). Soundscape ecology is a relatively new area of research (Pijanowski et al. 2011a) and is considered a branch of my current field, landscape ecology (Pijanowski et al. 2011b).

History

It is important to evaluate ARUs for avian study because vocalization accounts for most detections (Acevedo and Villanueva-Rivera 2006). The utility of autonomous recording units (ARUs) for ecology has been investigated for well over a decade, and, as with anything in ecology, the methods and technology are always evolving (Haselmayer and Quinn 2000). Thus, there is something of a trail in the literature as technology has improved, with respect to both recorders and analysis (Haselmayer and Quinn 2000). For example, not long ago automated classification could not meet the standard of reliability (Swiston and Mennill 2009), and manual classification seemed to be the only dependable way to identify songs in recordings (Waddle et al. 2009).

Methods

ARUs have their pros and cons, as well as their domains of applicability (Brandes 2008). Recording sounds can provide a less invasive alternative to direct observation, detect hard-to-observe species, and sample a large area. For example, an early (and ongoing) application of recording was monitoring nocturnal migration (Farnsworth and Gauthreaux 2004). Additionally, recordings are reviewable and free of human listening bias (Digby et al. 2013), which paves the way for standardizing observer effort and capability (Hobson et al. 2002). In comparison to a point count, the visual component is lost, but detection of species by the recording units appears to be relatively high (Alquezar and Machado 2015). Yet, if a target species is easier to detect visually, ARUs may not sample it as well as a point count (Celis-Murillo et al. 2012). For at least some species, the best method appears to be combining point counts and ARUs (Holmes et al. 2014). While there is an ever-growing body of literature on the applicability of ARUs, they are often better suited to certain sound qualities, species, or components of bioacoustics (such as temporal patterns) than others (Rognan et al. 2012). ARUs have now been tested across many ecosystem types, and the results are generally favorable (Venier et al. 2012). However, they may not sample species well that vocalize infrequently and/or are sparsely distributed (Sidie-Slettedahl et al. 2015). There are also different configurations of ARUs to answer different ecological questions (Mennill et al. 2006).

Analysis

Various indices aid in interpreting recordings (Towsey et al. 2014a). The R package “soundecology” now calculates a number of indices from recordings (a brief usage sketch follows this list)!

  • canonical discriminant analysis (CDA): identifying individuals (Rognan et al. 2009)
  • Acoustic Complexity Index (ACI): proxy for species richness (Pieretti et al. 2011)
  • Acoustic Richness index (AR)
  • Acoustic dissimilarity index (D) (Depraetere et al. 2012)
  • Within-group (α) indices (Sueur et al. 2014)
  • Between-group (β) indices
  • acoustic diversity = Shannon index of intensity per frequency (Pekin et al. 2012)
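
As a quick sketch of what using soundecology looks like (the file and folder names here are hypothetical, and the index arguments are left at the package defaults):

library(tuneR)
library(soundecology)

# Read one recording and compute two of the indices above
wav <- readWave("site01.wav")
aci <- acoustic_complexity(wav)  # Acoustic Complexity Index (ACI)
adi <- acoustic_diversity(wav)   # Shannon index of intensity per frequency band

# Or batch a whole folder of recordings into a CSV of index values
multiple_sounds(directory = "recordings", resultfile = "aci_results.csv",
                soundindex = "acoustic_complexity")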

Discussion

ARUs can answer ecological questions scaling from individual monitoring to community assemblage (Blumstein et al. 2011). For bird species that are well monitored by ARUs, the life-history detail gleaned can even surpass that of more traditional recapture methods (Mennill 2011)! With “song fingerprints” taking the place of color bands, it is possible to map individual movement patterns (Kirschel et al. 2011); most often, this means mapping territorial males (Frommolt and Tauchert 2014). If individuals detected at the same place are acoustically distinguishable, it may be possible to estimate abundance, and thus a given species’ population density, from recording surveys (Dawson and Efford 2009). Several species have already been shown to be distinguishable to the individual with recording analysis (Ehnes and Foote 2015). This allows broad-scale population monitoring, which may be especially important for threatened species (Bardeli et al. 2010). Further, community descriptors such as species composition may be approximated by characteristics of the soundscape (Celis-Murillo et al. 2009), and community metrics have been found to correlate back to landscape metrics, which may make them useful for conservation (Tucker et al. 2014).

Where we are now

There are still logistical and analytical hurdles to overcome, and the development and comparison of methods for sound analysis has paralleled many trends in ecology (Kirschel et al. 2009). For one, ARUs can present a big-data problem, so automating sound analysis is a priority (Towsey et al. 2014b). Because of the promise of ARUs, though, long-term recording projects are being designed (Turgeon et al. 2017). Right now, we are on the journey from manual to automated classification of songs, falling somewhere in the realm of “semi-automation” (Goyette et al. 2011). Recent efforts to enhance automated analysis focus on sampling techniques for days’ worth of recordings (Wimmer et al. 2013). We can now automate at least some species identification in recordings (Potamitis et al. 2014), though the success of automation appears to depend partly on the template-matching algorithms used (Joshi et al. 2017), as sketched below.
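
The studies above used various tools, but to make “semi-automation” concrete, here is a sketch of template matching in R with the monitoR package (my choice of package, not necessarily what any of the cited studies used; the file names, frequency limits, and the template itself are hypothetical):

library(monitoR)

# Build a spectrogram cross-correlation template from a clipped example song
templ <- makeCorTemplate("example_song.wav", frq.lim = c(1, 4), name = "target")

# Score a long field recording against the template and pull out score peaks
scores <- corMatch("survey.wav", templ)
peaks <- findPeaks(scores)

# Candidate detections still get human review -- hence "semi-automated"
getDetections(peaks)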

Literature Cited

Acevedo, M. A., C. J. Corrada-Bravo, H. Corrada-Bravo, L. J. Villanueva-Rivera, and T. M. Aide. 2009. Automated classification of bird and amphibian calls using machine learning: A comparison of methods. Ecological Informatics 4:206–214.

Acevedo, M. A., and L. J. Villanueva-Rivera. 2006. Using Automated Digital Recording Systems as Effective Tools for the Monitoring of Birds and Amphibians. Wildlife Society Bulletin 34:211–214.

Aide, T. M., C. Corrada-Bravo, M. Campos-Cerqueira, C. Milan, G. Vega, and R. Alvarez. 2013. Real-time bioacoustics monitoring and automated species identification. PeerJ 1:e103.

Alquezar, R. D., and R. B. Machado. 2015. Comparisons Between Autonomous Acoustic Recordings and Avian Point Counts in Open Woodland Savanna. The Wilson Journal of Ornithology 127:712–723.

Bardeli, R., D. Wolff, F. Kurth, M. Koch, K. H. Tauchert, and K. H. Frommolt. 2010. Detecting bird sounds in a complex acoustic environment and application to bioacoustic monitoring. Pattern Recognition Letters 31:1524–1534.

Blumstein, D. T., D. Mennill, P. Clemins, L. Girod, K. Yao, G. Patricelli, J. L. Deppe, A. H. Krakauer, C. Clark, K. A. Cortopassi, S. F. Hanser, B. McCowan, A. M. Ali, and A. N. G. Kirschel. 2011. Acoustic monitoring in terrestrial environments: applications, technological considerations and prospectus. Journal of Applied Ecology 48:758–767.

Brandes, T. S. 2008. Automated sound recording and analysis techniques for bird surveys and conservation. Bird Conservation International 18:S163–S173.

Celis-Murillo, A., J. L. Deppe, and M. F. Allen. 2009. Using soundscape recordings to estimate bird species abundance, richness, and composition. Journal of Field Ornithology 80:64–78.

Celis-Murillo, A., J. L. Deppe, and M. P. Ward. 2012. Effectiveness and utility of acoustic recordings for surveying tropical birds. Journal of Field Ornithology 83:166–179.

Dawson, D. K., and M. G. Efford. 2009. Bird population density estimated from acoustic signals. Journal of Applied Ecology 46:1201–1209.

Depraetere, M., S. Pavoine, F. Jiguet, A. Gasc, S. Duvail, and J. Sueur. 2012. Monitoring animal diversity using acoustic indices: Implementation in a temperate woodland. Ecological Indicators 13:46–54.

Digby, A., M. Towsey, B. D. Bell, and P. D. Teal. 2013. A practical comparison of manual and autonomous methods for acoustic monitoring. Methods in Ecology and Evolution 4:675–683.

Ehnes, M., and J. R. Foote. 2015. Comparison of autonomous and manual recording methods for discrimination of individually distinctive Ovenbird songs. Bioacoustics 24:111–121.

Farnsworth, A., and S. A. Gauthreaux. 2004. A comparison of nocturnal call counts of migrating birds and reflectivity measurements on Doppler radar. Journal of Avian Biology 35:365–369.

Frommolt, K. H., and K. H. Tauchert. 2014. Applying bioacoustic methods for long-term monitoring of a nocturnal wetland bird. Ecological Informatics 21:4–12.

Goyette, J. L., R. W. Howe, A. T. Wolf, and W. D. Robinson. 2011. Detecting tropical nocturnal birds using automated audio recordings. Journal of Field Ornithology 82:279–287.

Haselmayer, J., and J. S. Quinn. 2000. A comparison of point counts and sound recording as bird survey methods in Amazonian southeast Peru. The Condor 102:887–893.

Hobson, K. A., R. S. Rempel, H. Greenwood, B. Turnbull, and S. L. Van Wilgenburg. 2002. Acoustic surveys of birds using electronic recordings: New potential from an omnidirectional microphone system. Wildlife Society Bulletin 30:709–720.

Holmes, S. B., K. A. McIlwrick, and L. A. Venier. 2014. Using automated sound recording and analysis to detect bird species-at-risk in southwestern Ontario woodlands. Wildlife Society Bulletin 38:591–598.

Joshi, K. A., R. A. Mulder, and K. M. C. Rowe. 2017. Comparing manual and automated species recognition in the detection of four common south-east Australian forest birds from digital field recordings. Emu 117:233–246.

Kirschel, A. N. G., M. L. Cody, Z. T. Harlow, V. J. Promponas, E. E. Vallejo, and C. E. Taylor. 2011. Territorial dynamics of Mexican Ant-thrushes Formicarius moniliger revealed by individual recognition of their songs. Ibis 153:255–268.

Kirschel, A. N. G., D. A. Earl, Y. Yao, I. A. Escobar, E. Vilches, E. E. Vallejo, and C. E. Taylor. 2009. Using songs to identify individual Mexican Antthrush Formicarius moniliger: Comparison of four classification methods. Bioacoustics 19:1–20.

Mennill, D. J. 2011. Individual distinctiveness in avian vocalizations and the spatial monitoring of behaviour. Ibis 153:235–238.

Mennill, D. J., J. M. Burt, K. M. Fristrup, and S. L. Vehrencamp. 2006. Accuracy of an acoustic location system for monitoring the position of duetting songbirds in tropical forest. The Journal of the Acoustical Society of America 119:2832–2839.

Pekin, B. K., J. Jung, L. J. Villanueva-Rivera, B. C. Pijanowski, and J. A. Ahumada. 2012. Modeling acoustic diversity using soundscape recordings and LIDAR-derived metrics of vertical forest structure in a neotropical rainforest. Landscape Ecology 27:1513–1522.

Pieretti, N., A. Farina, and D. Morri. 2011. A new methodology to infer the singing activity of an avian community: The Acoustic Complexity Index (ACI). Ecological Indicators 11:868–873.

Pijanowski, B. C., A. Farina, S. H. Gage, S. L. Dumyahn, and B. L. Krause. 2011a. What is soundscape ecology? An introduction and overview of an emerging new science. Landscape Ecology 26:1213–1232.

Pijanowski, B. C., L. J. Villanueva-Rivera, S. L. Dumyahn, A. Farina, B. L. Krause, B. M. Napoletano, S. H. Gage, and N. Pieretti. 2011b. Soundscape Ecology: The Science of Sound in the Landscape. BioScience 61:203–216.

Potamitis, I., S. Ntalampiras, O. Jahn, and K. Riede. 2014. Automatic bird sound detection in long real-field recordings: Applications and tools. Applied Acoustics 80:1–9.

Rempel, R. S., C. M. Francis, J. N. Robinson, and M. Campbell. 2013. Comparison of audio recording system performance for detecting and monitoring songbirds. Journal of Field Ornithology 84:86–97.

Rognan, C. B., J. M. Szewczak, and M. L. Morrison. 2009. Vocal Individuality of Great Gray Owls in the Sierra Nevada. Journal of Wildlife Management 73:755–760.

Rognan, C. B., J. M. Szewczak, and M. L. Morrison. 2012. Autonomous Recording of Great Gray Owls in the Sierra Nevada. Northwestern Naturalist 93:138–144.

Sidie-Slettedahl, A. M., K. C. Jensen, R. R. Johnson, T. W. Arnold, J. E. Austin, and J. D. Stafford. 2015. Evaluation of autonomous recording units for detecting 3 species of secretive marsh birds. Wildlife Society Bulletin 39:626–634.

Sueur, J., A. Farina, A. Gasc, N. Pieretti, and S. Pavoine. 2014. Acoustic indices for biodiversity assessment and landscape investigation. Acta Acustica united with Acustica 100:772–781.

Swiston, K. A., and D. J. Mennill. 2009. Comparison of manual and automated methods for identifying target sounds in audio recordings of Pileated, Pale-billed, and putative Ivory-billed woodpeckers. Journal of Field Ornithology 80:42–50.

Towsey, M., J. Wimmer, I. Williamson, and P. Roe. 2014a. The use of acoustic indices to determine avian species richness in audio-recordings of the environment. Ecological Informatics 21:110–119.

Towsey, M., L. Zhang, M. Cottman-Fields, J. Wimmer, J. Zhang, and P. Roe. 2014b. Visualization of long-duration acoustic recordings of the environment. Procedia Computer Science 29:703–712.

Tucker, D., S. H. Gage, I. Williamson, and S. Fuller. 2014. Linking ecological condition and the soundscape in fragmented Australian forests. Landscape Ecology 29:745–758.

Turgeon, P. J., S. L. Van Wilgenburg, and K. L. Drake. 2017. Microphone variability and degradation: implications for monitoring programs employing autonomous recording units. Avian Conservation and Ecology 12:9.

Venier, L. A., S. B. Holmes, G. W. Holborn, K. A. McIlwrick, and G. Brown. 2012. Evaluation of an Automated Recording Device for Monitoring Forest Birds. Wildlife Society Bulletin 36:30–39.

Waddle, J. H., T. F. Thigpen, and B. M. Glorioso. 2009. Efficacy of automatic vocalization recognition software for anuran monitoring. Herpetological Conservation and Biology 4:384–388.

Wimmer, J., M. Towsey, P. Roe, and I. Williamson. 2013. Sampling environmental acoustic recordings to determine bird species richness. Ecological Applications 23:1419–1428.

Some Tidbits (& Tid-Bytes?) for Linux

I use Google Drive through google-drive-ocamlfuse because it provides a native mount, so it looks/acts much like the folder you’d be accustomed to on Windows. However, it’s much slower to sync, so be aware of that (i.e. it doesn’t do what the Windows client does, which is make a local folder and sync on its own time, or not at all). So, if you have a big file, the best thing to do is still to upload it over the web interface. On that same note, working with files in that folder is still “remote,” so although you can map to it, you may find processes a lot slower unless you move the files locally.

Google Earth Engine Image Exporting for Download

We were breaking apart our region of interest to facilitate downloads, but I decided instead to let Google tile the image for me. Some things I noticed: even when you provide a region for export, its bounding extent is used rather than an actual “clip” of the image. However, when you clip the image earlier in the script, it appears that the extra area is just filled with 0’s (so perhaps you can download a smaller image of the same extent by providing the clip).

Some weird notes on differences between the *.tifs, surely resulting from something I’ve changed…

  • the mask that keeps only the 1’s as part of the image: some things I tried earlier fussed at me, seemingly because of the mask, so exporting an image of 0’s and 1’s seemed to help

What results from both is an image of 0’s and 1’s, but oddly, “out of the box” the first TIFF displays as expected (i.e. in ArcGIS, even though the automatic color scheme is stretched, you can see the wetlands). However, the one with the bullet-point changes does not: it looks all black (the color assigned at the 0 end) until you display it with discrete colors. Then, the images look analogous. Looking closer, the original script exports with unsigned integer, 8-bit pixel depth, whereas my new script exports with floating point, 32-bit pixel depth. This is making the newer images way bigger (and, I think, unnecessarily).

What I’m Doing in Google Earth Engine

Right now, there’s a data set we want that’s distributed exclusively on Earth Engine. So, to break it into manageable/meaningful chunks for our analysis, we’ve created regions. We basically need to generate the raster for each year (which requires taking a “max” composite over the time period of interest), then clip the raster to each region and download it. All of this is done exclusively in the JavaScript IDE on the site.

Already, some data-distribution hurdles for this platform are apparent (to its credit, it sort of isn’t built for that purpose; it has many other strengths, i.e. actually being able to do the processing). Possibly one of the funkiest things about Earth Engine is the distinction (and the difference in how it’s coded) between “client-side” and “server-side” functions. Whenever you make something with “ee.”, it’s server-side: the server processes whatever it is into the thing you want. So, you have to deal with the resulting objects differently depending on which “side” you’re on.

Some Processing I’ve Done in GDAL

I have GDAL installed in Linux (i.e. the easiest way to install/implement it), so the following examples represent command-line usage. I have used a smattering of different GDAL utilities, and the links in the descriptions go to the manual page for the utility in each example. I have incorporated these example commands into various bash scripts when I need to iterate over multiple files.

This is an example of re-sampling a raster to larger pixel sizes (in this case, to a lower resolution of 0.09 degree from the original 0.0002 degree [30 m LANDSAT pixels]) by taking the mean of the pixels encompassed by each coarser output pixel.

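# -tr: output pixel size (x y); -r average: take the mean of the
# contributing input pixels; -ot: output data type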
gdalwarp -tr 0.09 0.09 -r average -ot Float32 file.tif fileout.tif
[Figure: output of the above command, where a 30 m resolution image was scaled up to a 10 km resolution image. The new pixel values are the averages of the binary raster pixels (0,1) that contributed to them. All maps are plotted in R 3.4.1.]

I have used GDAL to set no data values (in this case, I reclassified 0 as the no data value).

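# -a_nodata 0: assign 0 as the no-data value in the output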
gdal_translate -of GTiff -a_nodata 0 PPR/2000/Playas_2000.tif region_img/Playas.tif

Here’s an example of stitching two TIFFs together into a larger area, setting the output no-data value to be where the 0’s were in the original input files.

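# -o: output file; inputs are merged in the order given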
gdal_merge.py -o region_img/Canada.tif -a_nodata 0 PPR/2000/Alberta_PPR_2000.tif PPR/2000/SAK_MAN_PPR_2000.tif
[Figure: the merged raster. Notice blue is the only color, representing the only value left in the raster (1).]

If you want to stitch shapefiles together into a new file, you have to initialize the new shapefile with one of your input files first and then append the others to it.

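# The first command initializes Canada.shp from one input; -update -append
# adds the second file's features, and -nln names the target layer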
ogr2ogr regions/Canada.shp shapefiles/Alberta_PPR_2001.shp
ogr2ogr -update -append regions/Canada.shp shapefiles/SAK_MAN_PPR_2001.shp -nln Canada

If you’re going to, say, turn your raster into polygons, you can get rid of clumps below a certain size threshold before doing so (in this case, I’m getting rid of single pixels in my unary raster, using an 8-neighbor clumping rule).

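# -st 2: remove clumps smaller than 2 pixels; -8: use 8-connectedness;
# with no output file given, file.tif is edited in place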
gdal_sieve.py -st 2 -8 file.tif

Then, I can make my polygon layer, simplified. In the second line, I project the shapefile to Albers Equal Area.

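# gdal_polygonize.py -8: vectorize the raster using 8-connectedness
# ogr2ogr -t_srs "EPSG:5070": reproject to (NAD83) Conus Albers Equal Area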
gdal_polygonize.py -8 file.tif -f "ESRI Shapefile" shapefile/file.shp 
ogr2ogr -f "ESRI Shapefile" -progress outfile.shp shapefile/file.shp -t_srs "EPSG:5070"
[Figure: a shapefile of wetlands created from the TIFF of the Canadian Prairie Pothole Region!]

Waterfowl & Wetlands Literature Review

Studying waterfowl with large existing datasets is intimidating because I often have the sneaking suspicion that “someone has done this before.” I’m in the process of figuring out which of my suspicions are correct.

  • Are there more ducks where there are more wetlands in the surrounding landscape? If so, what scale is relevant for predicting waterfowl abundance (e.g. are there more waterfowl when the 10 km surrounding them contains more wetlands)?
    • No…
      • duck abundance on a given pond is lower when there are more wetlands in the surrounding landscape (Bartzen et al. 2017)
        • The Waterfowl Breeding Population and Habitat Survey (WBPHS), 1993–2002
        • Canadian PPR
        • generalized least-squares regression models (a rough sketch in R follows this list)
        • compound-symmetry covariance structure to account for repeated annual counts on ponds
        • AIC model selection
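
To make that model structure concrete, here’s a rough sketch of this kind of analysis in R with nlme. This is not the authors’ code, and the data frame and variable names are hypothetical:

library(nlme)

# GLS with a compound-symmetry correlation among repeated annual
# counts on the same pond
fit <- gls(duck_count ~ surrounding_wetlands,
           data = ponds,
           correlation = corCompSymm(form = ~ 1 | pond_id))

# Compare candidate models by AIC
AIC(fit)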

Literature Cited

Bartzen, B., K. W. Dufour, M. T. Bidwell, M. D. Watmough, and R. G. Clark. 2017. Relationships between abundances of breeding ducks and attributes of Canadian prairie wetlands. Wildlife Society Bulletin 41:416–423. doi:10.1002/wsb.794

Learning to use Google Earth Engine

I’m poking my way around Google Earth Engine, seeing what’s there and learning some basics of the web IDE. They have image and feature collections, accessible through their constructors…

  • ee.ImageCollection()
  • ee.FeatureCollection()

You can pass the name of a collection to ee.FeatureCollection() to query it. Sometimes these are actually Fusion Tables. Getting to the Fusion Tables themselves is kind of weird, but say you have a script or something else with a reference to a table’s ID. You can open it at…

https://fusiontables.google.com/data?docid=<whatever that long ID is>

Variables are modified through methods (.function()). Lines end with semicolons.