Autonomous Recording Units for Birds

I finally wrote a post that fits all of my blog categories! 🙂 Years ago, Dr. T. Mitchell Aide visited my former lab and I had an opportunity to meet with him. Hearing about his work with automated classification of bird calls first got my mind churning about how we can use autonomous recording units (ARUs) to gather field data (Aide et al. 2013). His work focused on the tropics, and thus on the complexity of the animal soundscape there (Acevedo et al. 2009). My interest grew when I came across the technique as a way of monitoring my nemesis bird, the yellow rail! ARUs have also been tested against the Breeding Bird Survey (BBS), which has been the focus of most of my research to date (Rempel et al. 2013). Soundscape ecology is a relatively new area of research (Pijanowski et al. 2011a) and is considered a branch of my current field, landscape ecology (Pijanowski et al. 2011b).

History

It is important to evaluate ARUs for avian study because vocalization accounts for most detections (Acevedo and Villanueva-Rivera 2006). The utility of ARUs for ecology has been investigated for well over a decade, and, as with anything in ecology, the methods and technology are always evolving (Haselmayer and Quinn 2000). Thus, there is something of a literature trail marking improvements in both recorders and analysis (Haselmayer and Quinn 2000). For example, automated classification fell short of reliability standards until relatively recently (Swiston and Mennill 2009), and manual classification seemed to be the only dependable way to identify songs in recordings (Waddle et al. 2009).

Methods

ARUs have their pros and cons, as well as their own range of applicability (Brandes 2008). Recording sound can provide a less invasive alternative to direct observation, detect hard-to-observe species, and sample a large area. For example, an early (and ongoing) application of recording was monitoring nocturnal migration (Farnsworth and Gauthreaux 2004). Additionally, recordings are reviewable and free of human listening bias (Digby et al. 2013), which paves the way for standardizing observer effort and capability (Hobson et al. 2002). In comparison to a point count, the visual component is lost, but detection of species by the recording units appears to be relatively high (Alquezar and Machado 2015). Yet, if a target species is easier to detect visually, ARUs may not sample it as well as a point count (Celis-Murillo et al. 2012). For at least some species, the best approach appears to be combining point counts and ARUs (Holmes et al. 2014). While there is an ever-growing body of literature on the applicability of ARUs, they are often better suited to certain sound qualities, certain species, or certain components of bioacoustics such as temporal patterns (Rognan et al. 2012). ARUs have now been tested across many different ecosystem types, with generally favorable results (Venier et al. 2012). However, they may not sample well species that vocalize infrequently and/or are sparsely distributed (Sidie-Slettedahl et al. 2015). Different configurations of ARUs can answer different ecological questions (Mennill et al. 2006).

Analysis

Various indices aid in interpreting recordings (Towsey et al. 2014a). The R package “soundecology” now calculates a number of these indices from recordings!

  • Canonical discriminant analysis (CDA): identifying individuals (Rognan et al. 2009)
  • Acoustic Complexity Index (ACI): proxy for species richness (Pieretti et al. 2011)
  • Acoustic Richness index (AR)
  • Acoustic dissimilarity index (D) (Depraetere et al. 2012)
  • Within-group (α) indices (Sueur et al. 2014)
  • Between-group (β) indices
  • Acoustic diversity: Shannon index of intensity per frequency (Pekin et al. 2012)
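To make the last item concrete, here is a minimal sketch of the acoustic diversity idea: split a clip’s spectrum into frequency bands, compute the proportion of total energy in each band, and take the Shannon index of those proportions. The band count, FFT size, and function name are illustrative choices for this sketch, not the parameters of soundecology or any published index.

```python
import numpy as np

def acoustic_diversity(signal, rate, n_bands=10, fft_size=512):
    """Shannon index of signal energy distributed across frequency bands."""
    # Magnitude spectrum of (the first fft_size samples of) the clip
    spectrum = np.abs(np.fft.rfft(signal, n=fft_size))
    # Sum energy within equal-width frequency bands
    bands = np.array_split(spectrum, n_bands)
    energy = np.array([b.sum() for b in bands])
    p = energy / energy.sum()             # proportion of energy per band
    p = p[p > 0]                          # drop empty bands (log(0) guard)
    return float(-(p * np.log(p)).sum())  # Shannon index

# Synthetic check: a pure tone concentrates energy in one band (low
# diversity), while white noise spreads energy across bands (high).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)
noise = rng.standard_normal(8000)
assert acoustic_diversity(tone, 8000) < acoustic_diversity(noise, 8000)
```

The same band-energy logic underlies several of the indices above; they differ mainly in how the per-band (or per-time-step) values are compared and aggregated.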

Discussion

ARUs can answer ecological questions scaling from individual monitoring to community assemblage (Blumstein et al. 2011). For bird species that are well monitored by ARUs, the life history detail gleaned can even surpass that from more traditional mark-recapture methods (Mennill 2011)! With “song fingerprints” taking the place of color bands, it is possible to map individual movement patterns (Kirschel et al. 2011). Most often, this means mapping territorial males (Frommolt and Tauchert 2014). If individuals detected at the same place are acoustically distinguishable, it may be possible to estimate abundance, and thus a species’ population density, from recording surveys (Dawson and Efford 2009). Several species have been shown to be distinguishable to the individual level through recording analysis (Ehnes and Foote 2015). This allows for broad-scale population monitoring, which may be especially important for threatened species (Bardeli et al. 2010). Further, community descriptors such as species composition may be approximated from characteristics of the soundscape (Celis-Murillo et al. 2009). Community metrics have been found to correlate back to landscape metrics, which may make them useful for conservation (Tucker et al. 2014).

Where we are now

There are still logistical and analytical hurdles to overcome, and the development and comparison of methods for sound analysis has paralleled many trends in ecology (Kirschel et al. 2009). For one, ARUs can present a big-data problem, so automating sound analysis is a priority (Towsey et al. 2014b). Because of the promise of ARUs, though, long-term recording projects are being designed (Turgeon et al. 2017). Right now, we are on the journey from manual to automated classification of songs, falling somewhere in the realm of “semi-automation” (Goyette et al. 2011). Recent efforts to enhance automated analysis focus on sampling techniques for days’ worth of recordings (Wimmer et al. 2013). We can now automate at least some species identification in recordings (Potamitis et al. 2014), although performance appears to depend partly on the template-matching algorithms used (Joshi et al. 2017).
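To illustrate the template-matching idea behind semi-automated detection, here is a hypothetical sketch: slide a known song clip (the template) along a longer recording and score each position by normalized cross-correlation, flagging peaks above a threshold as candidate detections for a human to review. The function name and the 0.8 threshold are illustrative assumptions, not taken from any of the cited studies or software.

```python
import numpy as np

def match_template(recording, template, threshold=0.8):
    """Return (offset, score) pairs where the template correlates strongly."""
    n = len(template)
    # Pre-normalize the template so each dot product below is a
    # Pearson correlation between template and window
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for start in range(len(recording) - n + 1):
        window = recording[start:start + n]
        sd = window.std()
        if sd == 0:              # skip silent (constant) windows
            continue
        w = (window - window.mean()) / sd
        score = float(np.dot(t, w))
        if score >= threshold:
            hits.append((start, score))
    return hits

# Toy check: embed the template inside noise and recover its offset.
rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * np.linspace(0, 5, 200))
recording = 0.1 * rng.standard_normal(1000)
recording[300:500] += template
offsets = [start for start, _ in match_template(recording, template)]
assert any(abs(o - 300) <= 2 for o in offsets)
```

Real detectors typically correlate spectrogram patches rather than raw waveforms, which is exactly why performance hinges on the matching algorithm and its parameters.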

Literature Cited

Acevedo, M. A., C. J. Corrada-Bravo, H. Corrada-Bravo, L. J. Villanueva-Rivera, and T. M. Aide. 2009. Automated classification of bird and amphibian calls using machine learning: A comparison of methods. Ecological Informatics 4:206–214.

Acevedo, M. A., and L. J. Villanueva-Rivera. 2006. Using Automated Digital Recording Systems as Effective Tools for the Monitoring of Birds and Amphibians. Wildlife Society Bulletin 34:211–214.

Aide, T. M., C. Corrada-Bravo, M. Campos-Cerqueira, C. Milan, G. Vega, and R. Alvarez. 2013. Real-time bioacoustics monitoring and automated species identification. PeerJ 1:e103.

Alquezar, R. D., and R. B. Machado. 2015. Comparisons Between Autonomous Acoustic Recordings and Avian Point Counts in Open Woodland Savanna. The Wilson Journal of Ornithology 127:712–723.

Bardeli, R., D. Wolff, F. Kurth, M. Koch, K. H. Tauchert, and K. H. Frommolt. 2010. Detecting bird sounds in a complex acoustic environment and application to bioacoustic monitoring. Pattern Recognition Letters 31:1524–1534.

Blumstein, D. T., D. Mennill, P. Clemins, L. Girod, K. Yao, G. Patricelli, J. L. Deppe, A. H. Krakauer, C. Clark, K. A. Cortopassi, S. F. Hanser, B. McCowan, A. M. Ali, and A. N. G. Kirschel. 2011. Acoustic monitoring in terrestrial environments: applications, technological considerations and prospectus. Journal of Applied Ecology 48:758–767.

Brandes, T. S. 2008. Automated sound recording and analysis techniques for bird surveys and conservation. Bird Conservation International 18:S163–S173.

Celis-Murillo, A., J. L. Deppe, and M. F. Allen. 2009. Using soundscape recordings to estimate bird species abundance, richness, and composition. Journal of Field Ornithology 80:64–78.

Celis-Murillo, A., J. L. Deppe, and M. P. Ward. 2012. Effectiveness and utility of acoustic recordings for surveying tropical birds. Journal of Field Ornithology 83:166–179.

Dawson, D. K., and M. G. Efford. 2009. Bird population density estimated from acoustic signals. Journal of Applied Ecology 46:1201–1209.

Depraetere, M., S. Pavoine, F. Jiguet, A. Gasc, S. Duvail, and J. Sueur. 2012. Monitoring animal diversity using acoustic indices: Implementation in a temperate woodland. Ecological Indicators 13:46–54.

Digby, A., M. Towsey, B. D. Bell, and P. D. Teal. 2013. A practical comparison of manual and autonomous methods for acoustic monitoring. Methods in Ecology and Evolution 4:675–683.

Ehnes, M., and J. R. Foote. 2015. Comparison of autonomous and manual recording methods for discrimination of individually distinctive Ovenbird songs. Bioacoustics 24:111–121.

Farnsworth, A., and S. A. Gauthreaux. 2004. A comparison of nocturnal call counts of migrating birds and reflectivity measurements on Doppler radar. Journal of Avian Biology 35:365–369.

Frommolt, K. H., and K. H. Tauchert. 2014. Applying bioacoustic methods for long-term monitoring of a nocturnal wetland bird. Ecological Informatics 21:4–12.

Goyette, J. L., R. W. Howe, A. T. Wolf, and W. D. Robinson. 2011. Detecting tropical nocturnal birds using automated audio recordings. Journal of Field Ornithology 82:279–287.

Haselmayer, J., and J. S. Quinn. 2000. A Comparison Of Point Counts And Sound Recording As Bird Survey Methods In Amazonian Southeast Peru. The Condor 102:887–893.

Hobson, K. A., R. S. Rempel, H. Greenwood, B. Turnbull, and S. L. Van Wilgenburg. 2002. Acoustic surveys of birds using electronic recordings: New potential from an omnidirectional microphone system. Wildlife Society Bulletin 30:709–720.

Holmes, S. B., K. A. McIlwrick, and L. A. Venier. 2014. Using automated sound recording and analysis to detect bird species-at-risk in southwestern Ontario woodlands. Wildlife Society Bulletin 38:591–598.

Joshi, K. A., R. A. Mulder, and K. M. C. Rowe. 2017. Comparing manual and automated species recognition in the detection of four common south-east Australian forest birds from digital field recordings. Emu 117:233–246.

Kirschel, A. N. G., M. L. Cody, Z. T. Harlow, V. J. Promponas, E. E. Vallejo, and C. E. Taylor. 2011. Territorial dynamics of Mexican Ant-thrushes Formicarius moniliger revealed by individual recognition of their songs. Ibis 153:255–268.

Kirschel, A. N. G., D. A. Earl, Y. Yao, I. A. Escobar, E. Vilches, E. E. Vallejo, and C. E. Taylor. 2009. Using songs to identify individual Mexican Antthrush Formicarius moniliger: Comparison of four classification methods. Bioacoustics 19:1–20.

Mennill, D. J. 2011. Individual distinctiveness in avian vocalizations and the spatial monitoring of behaviour. Ibis 153:235–238.

Mennill, D. J., J. M. Burt, K. M. Fristrup, and S. L. Vehrencamp. 2006. Accuracy of an acoustic location system for monitoring the position of duetting songbirds in tropical forest. The Journal of the Acoustical Society of America 119:2832–2839.

Pekin, B. K., J. Jung, L. J. Villanueva-Rivera, B. C. Pijanowski, and J. A. Ahumada. 2012. Modeling acoustic diversity using soundscape recordings and LIDAR-derived metrics of vertical forest structure in a neotropical rainforest. Landscape Ecology 27:1513–1522.

Pieretti, N., A. Farina, and D. Morri. 2011. A new methodology to infer the singing activity of an avian community: The Acoustic Complexity Index (ACI). Ecological Indicators 11:868–873.

Pijanowski, B. C., A. Farina, S. H. Gage, S. L. Dumyahn, and B. L. Krause. 2011a. What is soundscape ecology? An introduction and overview of an emerging new science. Landscape Ecology 26:1213–1232.

Pijanowski, B. C., L. J. Villanueva-Rivera, S. L. Dumyahn, A. Farina, B. L. Krause, B. M. Napoletano, S. H. Gage, and N. Pieretti. 2011b. Soundscape Ecology: The Science of Sound in the Landscape. BioScience 61:203–216.

Potamitis, I., S. Ntalampiras, O. Jahn, and K. Riede. 2014. Automatic bird sound detection in long real-field recordings: Applications and tools. Applied Acoustics 80:1–9.

Rempel, R. S., C. M. Francis, J. N. Robinson, and M. Campbell. 2013. Comparison of audio recording system performance for detecting and monitoring songbirds. Journal of Field Ornithology 84:86–97.

Rognan, C. B., J. M. Szewczak, and M. L. Morrison. 2009. Vocal Individuality of Great Gray Owls in the Sierra Nevada. Journal of Wildlife Management 73:755–760.

Rognan, C. B., J. M. Szewczak, and M. L. Morrison. 2012. Autonomous Recording of Great Gray Owls in the Sierra Nevada. Northwestern Naturalist 93:138–144.

Sidie-Slettedahl, A. M., K. C. Jensen, R. R. Johnson, T. W. Arnold, J. E. Austin, and J. D. Stafford. 2015. Evaluation of autonomous recording units for detecting 3 species of secretive marsh birds. Wildlife Society Bulletin 39:626–634.

Sueur, J., A. Farina, A. Gasc, N. Pieretti, and S. Pavoine. 2014. Acoustic indices for biodiversity assessment and landscape investigation. Acta Acustica united with Acustica 100:772–781.

Swiston, K. A., and D. J. Mennill. 2009. Comparison of manual and automated methods for identifying target sounds in audio recordings of Pileated, Pale-billed, and putative Ivory-billed Woodpeckers. Journal of Field Ornithology 80:42–50.

Towsey, M., J. Wimmer, I. Williamson, and P. Roe. 2014a. The use of acoustic indices to determine avian species richness in audio-recordings of the environment. Ecological Informatics 21:110–119.

Towsey, M., L. Zhang, M. Cottman-Fields, J. Wimmer, J. Zhang, and P. Roe. 2014b. Visualization of long-duration acoustic recordings of the environment. Procedia Computer Science 29:703–712.

Tucker, D., S. H. Gage, I. Williamson, and S. Fuller. 2014. Linking ecological condition and the soundscape in fragmented Australian forests. Landscape Ecology 29:745–758.

Turgeon, P. J., S. L. Van Wilgenburg, and K. L. Drake. 2017. Microphone variability and degradation: implications for monitoring programs employing autonomous recording units. Avian Conservation and Ecology 12:9.

Venier, L. A., S. B. Holmes, G. W. Holborn, K. A. McIlwrick, and G. Brown. 2012. Evaluation of an automated recording device for monitoring forest birds. Wildlife Society Bulletin 36:30–39.

Waddle, J. H., T. F. Thigpen, and B. M. Glorioso. 2009. Efficacy of automatic vocalization recognition software for anuran monitoring. Herpetological Conservation and Biology 4:384–388.

Wimmer, J., M. Towsey, P. Roe, and I. Williamson. 2013. Sampling environmental acoustic recordings to determine bird species richness. Ecological Applications 23:1419–1428.