My Favorite Bird Sightings of 2017

This year “began” when I got back from vacation: on 1/14, Paul and I saw a Curve-billed Thrasher coming to a feeder in MN! Admittedly, much of the rest of the year was a drought, with some crucial misses. Just as the birding demoralization was setting in, though, it came time for our trip out west! It was our first time to the Pacific NW, so birding a new place was magical. I wrote posts along the way that tell the more detailed story, but here are our lifers…

  • Pacific-slope Flycatcher
  • Hutton’s Vireo
  • Black-throated Gray Warbler
  • Vaux’s Swift
  • Pacific Wren
  • Brandt’s Cormorant
  • Surfbird
  • Mew Gull
  • Pigeon Guillemot
  • Marbled Murrelet
  • Rhinoceros Auklet
  • Northwestern Crow
  • Black Oystercatcher
  • Black Turnstone
  • Common Murre
  • Bushtit
  • Steller’s Jay
  • Chestnut-backed Chickadee
  • Swallow-tailed Gull
  • Heermann’s Gull
  • Pelagic Cormorant
  • Glaucous-winged Gull

Then the fall brought me not a lifer, but a “deck bird” Short-eared Owl! I first saw it when it bombed down over my building and kept on flying across the street, only to bank and head back my way. Then, I saw it (or maybe a different one) a few weeks later hunting the dunes across the street!

The year wrapped up in late fall with some really satisfying looks at nemeses! I finally caught up with my lifer Pacific Loon on Lake Superior. Then, Paul and I saw our lifer American Three-toed Woodpecker in Two Harbors, MN!

Building Networks for Wetland Connectivity

We’ve been playing with ways to construct networks according to graph theory, ultimately using the R package igraph to investigate wetland connectivity. I can’t reveal too much here because it’s my current research in progress! 🙂 The part of the process I’m currently trying to make more efficient is calculating, for each wetland, the distance to every other wetland within a certain threshold. Ways I can go…

  • Create new spatial objects that represent minimum polygons around my networks, and do the distance analysis on these. My concern is that the “new shape” will be too imprecise and will skew our results (or I’ll have to use the computational power anyway to confirm a distance calculation/figure out what the nearest wetland in the shape is). I’m also wondering if this will create new borders that are within 5 km of each other.
  • Create an adjacency matrix for the polygons, and then exclude neighbors from consideration as they’re lumped into networks (a rough sketch of this graph-building step follows below)
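
To make this concrete, here’s a minimal sketch of the threshold-and-graph step (not my actual workflow): it assumes a hypothetical polygon layer “wetlands.shp” in a projected CRS with units of meters, uses the sf and igraph packages, and substitutes centroid-to-centroid distances for the edge-to-edge distances I’d eventually need.

    # Minimal sketch: wetlands within 5 km of each other form a "network"
    # (connected component). File name and threshold are placeholders.
    library(sf)
    library(igraph)

    wetlands <- st_read("wetlands.shp")                  # hypothetical polygon layer, projected CRS (m)
    cent_xy  <- st_coordinates(st_centroid(st_geometry(wetlands)))
    dmat     <- as.matrix(dist(cent_xy))                 # pairwise centroid-to-centroid distances (m)

    # Centroids are a crude stand-in; st_distance(wetlands) would give true
    # edge-to-edge distances at a much higher computational cost.
    adj <- 1 * (dmat <= 5000)                            # adjacency: within 5 km
    diag(adj) <- 0

    g <- graph_from_adjacency_matrix(adj, mode = "undirected")
    networks <- components(g)$membership                 # network ID for each wetland
    table(networks)                                      # how many wetlands per network

Excluding neighbors that are already lumped into the same network from further distance calculations then amounts to skipping pairs that share a membership value.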

Thoughts on Birding Without Binoculars

An interesting topic came up about the virtues of birding without binoculars. I have only really done this when I’ve forgotten my binoculars but had some desire/need to go out anyway. One time, it ended up being a fun exercise for someone I was mentoring at the beginning of their birding journey.

First off, it’s humbling: you’re reduced to your mere mortal eyesight, and your optical superpower is gone. It puts you back in leagues with everything around you, which is a good feeling while being out in nature. You and every other species you encounter have only your natural equipment to take in the ecosystem around you, so the playing field is a bit more leveled. Getting back to those species capabilities, I also had to be all the more careful stalking a bird, because I needed to be closer to it to get a good look. This may even lead the observer to be more careful and less apt to flush a bird: it’s tempting to risk a flush when you know the bird will perch more out in the open, but birding without binoculars means you need to keep it close enough to see well. Does this promote “lower impact” birding, in the sense of less disturbance to the bird? And does it promote more careful, quiet walks in the woods, as opposed to car birding to check out things at a distance and take in as many species as can easily be observed from one spot?

I tend to be, like perhaps everyone else, more thrilled with close, naked-eye looks at birds than with views through optics. Those sightings are pinnacle observations to me anyway. Not only is it a treat to see them without needing any aid, but there’s also the thrill of a wild animal coming close to you, and the delicate nature of the situation required to “earn its trust,” even if only for a few moments. You learn even more about nature when you have to be careful and still achieve your goal of getting close without disturbing the bird. Also, in those times when you’re so close that a movement could flush a bird, you carefully watch in awe until it leaves you, which means you may spend more time watching it than you would from a distance. For me, that enhances the connection to the birds I’m observing.

So, with this “handicap” of not having optics, what do you observe? What cues do you have to rely on more? Do you even notice things you may not have otherwise? I experienced a bit more of a “gestalt” of the bird community around me: patterns of activity, and perhaps a more nuanced appreciation of micro-site habitat use. It goes without saying that I relied on my ears all the more, as they sometimes became the only clue even for birds I would otherwise have been able to look at. In that, being able to identify “small sounds” became all the more critical. Instead of a chip note pointing me toward the presence of a bird, the chip note itself became more important, and thus more desirable to identify.

Also, trying to “turn up a rarity” sort of goes out the window. You’re more focused on what you know to be around you than on picking through faraway birds for the sake of finding an unusual species. This can help get you out of “listing mode” and more into natural appreciation. I’d say there’s a time and a place for both, but it’s possible to lose appreciation for the common when you’re listing (and, to be fair, it’s also possible to do both with mindful attention to nature).

“I think the use of binoculars, all things considered, hinders birding…I think of binoculars in the same manner that I think of cars…In the dead of winter, you’ll see a lot more Red-tailed Hawks per hour from a moving car than just by walking or standing around. But imagine if you always stayed in your car while birding. Most of the time, I would say, it’s good to get out of the car. And most of the time—indeed, almost all of the time, I would say—it’s good to leave your binoculars behind.” – Ted Floyd

Ted goes on to describe outings that specifically encourage you to leave your binoculars at home, and the nuances in bird ID you learn along the way! On his group outing, they made an extra effort to not only identify birds to species but also to age/sex them when possible, which required careful observation. He even claims that they could have overlooked warblers if they had been using binoculars, which is counterintuitive. He also claims that the dynamics of the group changed: it was less disruptive, and potentially more cohesive.

At the very least, it could be a new frontier for you, and a new challenge if you’ve been birding for a while and are looking to up your game. Maybe planning for it will ease my angst about sending off my binoculars for a much-needed cleaning and tune-up, when I’d be without them for a few weeks. Hey, it’s scoping season around here anyway on the big lake, so I could always take a hybrid approach instead of going “cold turkey” this time of year…

Autonomous Recording Units for Birds

I finally wrote a post that fits all of my blog categories! 🙂 Years ago, Dr. T. Mitchell Aide visited my former lab, and I had an opportunity to meet with him. Hearing about his work with automated classification of bird calls first got my mind churning about how we can use autonomous recording units (ARUs) for gathering field data (Aide et al. 2013). His work focused on the tropics, and thus on the complexity of the animal soundscape (Acevedo et al. 2009). I became further interested when I came across the technique being used to monitor my nemesis bird, the Yellow Rail! ARUs have also been tested with respect to the Breeding Bird Survey (BBS), which has been the focus of most of my research to date (Rempel et al. 2013). Soundscape ecology is a relatively new area of research (Pijanowski et al. 2011a) and is considered a branch of my current field, landscape ecology (Pijanowski et al. 2011b).

History

It is important to evaluate ARUs for avian study because vocalization accounts for most detections (Acevedo and Villanueva-Rivera 2006). The utility of ARUs for ecology has been investigated for well over a decade, and, as with anything in ecology, the methods and technology are always evolving (Haselmayer and Quinn 2000). Thus, there is a trail through the literature as the technology has improved, with respect to both recorders and analysis (Haselmayer and Quinn 2000). For example, not long ago automated classification fell short of the standard of reliability (Swiston and Mennill 2009). Manual classification seemed to be the only reliable way to identify songs in recordings (Waddle et al. 2009).

Methods

ARUs have their pros and cons, as well as their own range of applicability (Brandes 2008). Recording sound can provide a less invasive alternative to direct observation, detect hard-to-observe species, and sample a large area. For example, an early (and ongoing) application of recording has been monitoring nocturnal migration (Farnsworth and Gauthreaux 2004). Additionally, recordings are reviewable and are not subject to human listening bias (Digby et al. 2013). This paves the way for standardizing observer effort and capability (Hobson et al. 2002). In comparison to a point count, the visual component is lost, but detection of species by the recording units appears to be relatively high (Alquezar and Machado 2015). Yet, if a target species is easier to detect visually, ARUs may not sample it as well as a point count does (Celis-Murillo et al. 2012). For at least some species, the best approach appears to be combining point counts and ARUs (Holmes et al. 2014). While there is now an ever-growing body of literature on the applicability of ARUs, they are often better suited to certain sound qualities, species, or components of bioacoustics, such as temporal patterns (Rognan et al. 2012). ARUs have now been tested across many different ecosystem types, and results are generally favorable (Venier et al. 2012). However, they may not sample species well that vocalize infrequently and/or are sparsely distributed (Sidie-Slettedahl et al. 2015). There are also different configurations of ARUs to answer different ecological questions (Mennill et al. 2006).

Analysis

Various indices aid in interpreting recordings (Towsey et al. 2014a). There is an R package, “soundecology”, that now calculates a number of these indices from recordings (a minimal example follows the list below)!

  • Canonical discriminant analysis (CDA): identifying individuals (Rognan et al. 2009)
  • Acoustic Complexity Index (ACI): proxy for species richness (Pieretti et al. 2011)
  • Acoustic Richness index (AR)
  • Acoustic dissimilarity index (D) (Depraetere et al. 2012)
  • Within-group (α) indices (Sueur et al. 2014)
  • Between-group (β) indices
  • Acoustic diversity: Shannon index of intensity per frequency (Pekin et al. 2012)
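
To give a flavor of how these get computed, here’s a minimal sketch using soundecology and tuneR on a single recording; the file name is a placeholder, and the indices run with the package defaults rather than parameters I’ve settled on.

    # Minimal sketch: compute two acoustic indices for one recording.
    # "site01.wav" is a placeholder file name.
    library(tuneR)
    library(soundecology)

    wav <- readWave("site01.wav")

    aci <- acoustic_complexity(wav)   # Acoustic Complexity Index (Pieretti et al. 2011)
    adi <- acoustic_diversity(wav)    # acoustic diversity: Shannon index over frequency bands

    str(aci)                          # inspect the per-channel results
    str(adi)

    # multiple_sounds() can loop the same index over a whole folder of
    # recordings and write the results to a file.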

Discussion

ARUs can answer ecological questions scaling from individual monitoring to community assemblage (Blumstein et al. 2011). For bird species that are well monitored by ARUs, the life-history detail gleaned can even surpass what more traditional recapture methods provide (Mennill 2011)! With “song fingerprints” taking the place of color bands, it is possible to map individual movement patterns (Kirschel et al. 2011). Most often, this means mapping territorial males (Frommolt and Tauchert 2014). If individuals detected at the same place are acoustically distinguishable, it may be possible to estimate abundance, and thus a given species’ population density, from recording surveys (Dawson and Efford 2009). Several species have been shown to be distinguishable to the individual level through recording analysis (Ehnes and Foote 2015). This allows for broad-scale population monitoring, which may be especially important for threatened species (Bardeli et al. 2010). Further, community descriptors such as species composition may be approximated by characteristics of the soundscape (Celis-Murillo et al. 2009). Community metrics have been found to correlate back to landscape metrics, which may make them useful for conservation (Tucker et al. 2014).

Where we are now

There are still logistical and analytical hurdles to overcome, and the development and comparison of methods for sound analysis has paralleled many trends in ecology (Kirschel et al. 2009). For one, ARUs can present a big-data problem, so automating sound analysis is a priority (Towsey et al. 2014b). Because of the promise of ARUs, though, long-term recording projects are being designed (Turgeon et al. 2017). Right now, we are on the journey from manual to automated classification of songs, falling somewhere in the realm of “semi-automation” (Goyette et al. 2011). Recent efforts to enhance automated analysis focus on sampling techniques for days’ worth of recordings (Wimmer et al. 2013). We can now automate at least some species identification in recordings (Potamitis et al. 2014). However, performance appears to depend partly on the template-matching algorithms used (Joshi et al. 2017).
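
As one flavor of what that semi-automated, template-based workflow can look like, here’s a rough sketch using spectrogram cross-correlation in the monitoR R package; the file names, clip times, and frequency limits are placeholders, and this is just one template-matching approach among several.

    # Rough sketch of spectrogram cross-correlation template matching (monitoR).
    # File names, clip times, and frequency limits below are placeholders.
    library(monitoR)

    # Build a correlation template from a known song clip
    templ <- makeCorTemplate("target_song.wav",
                             t.lim = c(0.5, 2.0),   # seconds within the clip
                             frq.lim = c(2, 6),     # kHz band of the song
                             name = "target")
    templs <- combineCorTemplates(templ)

    # Scan a long field recording for matches and pull out score peaks
    scores  <- corMatch("aru_survey.wav", templs)
    detects <- findPeaks(scores)
    getDetections(detects)                          # table of putative detections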

Literature Cited

Acevedo, M. A., C. J. Corrada-Bravo, H. Corrada-Bravo, L. J. Villanueva-Rivera, and T. M. Aide. 2009. Automated classification of bird and amphibian calls using machine learning: A comparison of methods. Ecological Informatics 4:206–214.

Acevedo, M. A., and L. J. Villanueva-Rivera. 2006. Using Automated Digital Recording Systems as Effective Tools for the Monitoring of Birds and Amphibians. Wildlife Society Bulletin 34:211–214.

Aide, T. M., C. Corrada-Bravo, M. Campos-Cerqueira, C. Milan, G. Vega, and R. Alvarez. 2013. Real-time bioacoustics monitoring and automated species identification. PeerJ 1:e103.

Alquezar, R. D., and R. B. Machado. 2015. Comparisons Between Autonomous Acoustic Recordings and Avian Point Counts in Open Woodland Savanna. The Wilson Journal of Ornithology 127:712–723.

Bardeli, R., D. Wolff, F. Kurth, M. Koch, K. H. Tauchert, and K. H. Frommolt. 2010. Detecting bird sounds in a complex acoustic environment and application to bioacoustic monitoring. Pattern Recognition Letters 31:1524–1534.

Blumstein, D. T., D. Mennill, P. Clemins, L. Girod, K. Yao, G. Patricelli, J. L. Deppe, A. H. Krakauer, C. Clark, K. A. Cortopassi, S. F. Hanser, B. McCowan, A. M. Ali, and A. N. G. Kirschel. 2011. Acoustic monitoring in terrestrial environments: applications, technological considerations and prospectus. Journal of Applied Ecology 48:758–767.

Brandes, T. S. 2008. Automated sound recording and analysis techniques for bird surveys and conservation. Bird Conservation International 18:S163–S173.

Celis-Murillo, A., J. L. Deppe, and M. F. Allen. 2009. Using soundscape recordings to estimate bird species abundance, richness, and composition. Journal of Field Ornithology 80:64–78.

Celis-Murillo, A., J. L. Deppe, and M. P. Ward. 2012. Effectiveness and utility of acoustic recordings for surveying tropical birds. Journal of Field Ornithology 83:166–179.

Dawson, D. K., and M. G. Efford. 2009. Bird population density estimated from acoustic signals. Journal of Applied Ecology 46:1201–1209.

Depraetere, M., S. Pavoine, F. Jiguet, A. Gasc, S. Duvail, and J. Sueur. 2012. Monitoring animal diversity using acoustic indices: Implementation in a temperate woodland. Ecological Indicators 13:46–54.

Digby, A., M. Towsey, B. D. Bell, and P. D. Teal. 2013. A practical comparison of manual and autonomous methods for acoustic monitoring. Methods in Ecology and Evolution 4:675–683.

Ehnes, M., and J. R. Foote. 2015. Comparison of autonomous and manual recording methods for discrimination of individually distinctive Ovenbird songs. Bioacoustics 24:111–121.

Farnsworth, A., and S. A. Gauthreaux. 2004. A comparison of nocturnal call counts of migrating birds and reflectivity measurements on Doppler radar. Journal of Avian Biology 35:365–369.

Frommolt, K. H., and K. H. Tauchert. 2014. Applying bioacoustic methods for long-term monitoring of a nocturnal wetland bird. Ecological Informatics 21:4–12.

Goyette, J. L., R. W. Howe, A. T. Wolf, and W. D. Robinson. 2011. Detecting tropical nocturnal birds using automated audio recordings. Journal of Field Ornithology 82:279–287.

Haselmayer, J., and J. S. Quinn. 2000. A comparison of point counts and sound recording as bird survey methods in Amazonian southeast Peru. The Condor 102:887–893.

Hobson, K. A., R. S. Rempel, H. Greenwood, B. Turnbull, and S. L. Van Wilgenburg. 2002. Acoustic surveys of birds using electronic recordings: New potential from an omnidirectional microphone system. Wildlife Society Bulletin 30:709–720.

Holmes, S. B., K. A. McIlwrick, and L. A. Venier. 2014. Using automated sound recording and analysis to detect bird species-at-risk in southwestern Ontario woodlands. Wildlife Society Bulletin 38:591–598.

Joshi, K. A., R. A. Mulder, and K. M. C. Rowe. 2017. Comparing manual and automated species recognition in the detection of four common south-east Australian forest birds from digital field recordings. Emu 117:233–246.

Kirschel, A. N. G., M. L. Cody, Z. T. Harlow, V. J. Promponas, E. E. Vallejo, and C. E. Taylor. 2011. Territorial dynamics of Mexican Ant-thrushes Formicarius moniliger revealed by individual recognition of their songs. Ibis 153:255–268.

Kirschel, A. N. G., D. A. Earl, Y. Yao, I. A. Escobar, E. Vilches, E. E. Vallejo, and C. E. Taylor. 2009. Using songs to identify individual Mexican Antthrush Formicarius moniliger: Comparison of four classification methods. Bioacoustics 19:1–20.

Mennill, D. J. 2011. Individual distinctiveness in avian vocalizations and the spatial monitoring of behaviour. Ibis 153:235–238.

Mennill, D. J., J. M. Burt, K. M. Fristrup, and S. L. Vehrencamp. 2006. Accuracy of an acoustic location system for monitoring the position of duetting songbirds in tropical forest. The Journal of the Acoustical Society of America 119:2832–2839.

Pekin, B. K., J. Jung, L. J. Villanueva-Rivera, B. C. Pijanowski, and J. A. Ahumada. 2012. Modeling acoustic diversity using soundscape recordings and LIDAR-derived metrics of vertical forest structure in a neotropical rainforest. Landscape Ecology 27:1513–1522.

Pieretti, N., A. Farina, and D. Morri. 2011. A new methodology to infer the singing activity of an avian community: The Acoustic Complexity Index (ACI). Ecological Indicators 11:868–873.

Pijanowski, B. C., A. Farina, S. H. Gage, S. L. Dumyahn, and B. L. Krause. 2011a. What is soundscape ecology? An introduction and overview of an emerging new science. Landscape Ecology 26:1213–1232.

Pijanowski, B. C., L. J. Villanueva-Rivera, S. L. Dumyahn, A. Farina, B. L. Krause, B. M. Napoletano, S. H. Gage, and N. Pieretti. 2011b. Soundscape Ecology: The Science of Sound in the Landscape. BioScience 61:203–216.

Potamitis, I., S. Ntalampiras, O. Jahn, and K. Riede. 2014. Automatic bird sound detection in long real-field recordings: Applications and tools. Applied Acoustics 80:1–9.

Rempel, R. S., C. M. Francis, J. N. Robinson, and M. Campbell. 2013. Comparison of audio recording system performance for detecting and monitoring songbirds. Journal of Field Ornithology 84:86–97.

Rognan, C. B., J. M. Szewczak, and M. L. Morrison. 2009. Vocal Individuality of Great Gray Owls in the Sierra Nevada. Journal of Wildlife Management 73:755–760.

Rognan, C. B., J. M. Szewczak, and M. L. Morrison. 2012. Autonomous Recording of Great Gray Owls in the Sierra Nevada. Northwestern Naturalist 93:138–144.

Sidie-Slettedahl, A. M., K. C. Jensen, R. R. Johnson, T. W. Arnold, J. E. Austin, and J. D. Stafford. 2015. Evaluation of autonomous recording units for detecting 3 species of secretive marsh birds. Wildlife Society Bulletin 39:626–634.

Sueur, J., A. Farina, A. Gasc, N. Pieretti, and S. Pavoine. 2014. Acoustic indices for biodiversity assessment and landscape investigation. Acta Acustica united with Acustica 100:772–781.

Swiston, K. A., and D. J. Mennill. 2009. Comparison of manual and automated methods for identifying target sounds in audio recordings of Pileated, Pale-billed, and putative Ivory-billed Woodpeckers. Journal of Field Ornithology 80:42–50.

Towsey, M., J. Wimmer, I. Williamson, and P. Roe. 2014a. The use of acoustic indices to determine avian species richness in audio-recordings of the environment. Ecological Informatics 21:110–119.

Towsey, M., L. Zhang, M. Cottman-Fields, J. Wimmer, J. Zhang, and P. Roe. 2014b. Visualization of long-duration acoustic recordings of the environment. Procedia Computer Science 29:703–712.

Tucker, D., S. H. Gage, I. Williamson, and S. Fuller. 2014. Linking ecological condition and the soundscape in fragmented Australian forests. Landscape Ecology 29:745–758.

Turgeon, P. J., S. L. Van Wilgenburg, and K. L. Drake. 2017. Microphone variability and degradation: implications for monitoring programs employing autonomous recording units. Avian Conservation and Ecology 12:9.

Venier, L. A., S. B. Holmes, G. W. Holborn, K. A. Mcilwrick, and G. Brown. 2012. Evaluation of an Automated Recording Device for Monitoring Forest Birds. Wildlife Society Bulletin 36:30–39.

Waddle, J. H., T. F. Thigpen, and B. M. Glorioso. 2009. Efficacy of automatic vocalization recognition software for anuran monitoring. Herpetological Conservation and Biology 4:384–388.

Wimmer, J., M. Towsey, P. Roe, and I. Williamson. 2013. Sampling environmental acoustic recordings to determine bird species richness. Ecological Applications 23:1419–1428.