
Data from: Multiple constraints on urban bird communication: both abiotic and biotic noise shape songs in cities

Citation

To, Ann W Y; Dingle, Caroline; Collins, Sarah A (2021), Data from: Multiple constraints on urban bird communication: both abiotic and biotic noise shape songs in cities, Dryad, Dataset, https://doi.org/10.5061/dryad.k6djh9w6b

Abstract

Ambient noise can cause birds to adjust their songs to avoid masking. Most studies investigate responses to a single noise source (e.g. low-frequency traffic noise or high-frequency insect noise). Here we investigated the effects of both anthropogenic and insect noise on the vocalizations of four common bird species in Hong Kong. Common Tailorbirds (Orthotomus sutorius) and Eurasian Tree Sparrows (Passer montanus) both sang at a higher frequency in urban areas than in peri-urban areas. Red-whiskered Bulbuls (Pycnonotus jocosus) in urban areas shifted only the first note of their song upwards. Swinhoe’s White-eye (Zosterops simplex) vocalization changes were correlated with noise level but did not differ between the peri-urban and urban populations. Insect noise caused Eurasian Tree Sparrows to reduce the maximum frequency, peak frequency, and overall bandwidth of their vocalizations. Insect noise also led to a reduction in maximum frequency in Red-whiskered Bulbuls. The presence of both urban noise and insect noise affected the songs of Common Tailorbirds and Eurasian Tree Sparrows; in urban areas they no longer increased their minimum song frequency when insect sounds were also present. These results highlight the complexity of the soundscape in urban areas. The presence of both high- and low-frequency ambient noise may make it difficult for urban birds to avoid signal masking while still maintaining their fitness in noisy cities.

Methods

We collected bird song recordings and noise measurements from 11 urban and 11 peri-urban sites across Hong Kong between 17 June and 8 September 2013. Urban sites comprised urban parks (7 sites) and roadside green spaces (4 sites), while peri-urban sites were in, or next to, protected areas (4 sites), traditional rural villages (5 sites), or outlying islands (2 sites). Each sampling location was visited once during the study period, between 0600 and 1400 local time (UTC+8). At each site, songs were recorded along a single transect ranging from 1.5 to 4.2 km in length; recording duration ranged from two to five hours depending on transect length. All transects followed accessible routes through the sampling site, such as roads, trails, and footpaths. We recorded all birds that sang within the sampling period: recording started as soon as a bird was heard singing and stopped after the bird ceased singing. To avoid recording the same individual twice within a site, recordings were made at least 25 m apart. If more than one individual of the same species was singing at the same time, all songs in that recording were analyzed, but this was counted as a single sample. Data were collected only in fine weather, i.e. no rain or strong wind. For sites on outlying islands, we did not sample near the coastline, to limit the impact of low-frequency noise produced by wave action.

We recorded songs with a TASCAM DR-40 digital recorder (TASCAM, Japan) and a Superlux PRA118L shotgun microphone (Superlux, Taiwan) fitted with a windscreen. Recordings were made in mono-channel mode as 24-bit WAV files at a 44.1 kHz sampling rate; no cut-off frequency filter was applied. All birds singing along the transect line were recorded with the same settings.

The background noise level at each site was measured with a WESEN WS1361 (WESEN, China) Type II sound level meter set to C-weighting, chosen for its sensitivity to low-frequency noise. The sound level meter was mounted on a tripod at a height of 1.2 m and at least one meter away from any surface to avoid sound reflection, which would result in a higher reading (Environmental Protection Department 1997). Because this measurement targeted low-frequency anthropogenic noise, it was paused whenever insect noise occurred in the environment. Measurements were taken in three directions (000°, 120°, 240°), using the Leq(C) (equivalent continuous sound level, C-weighted) measured for five minutes each at the start and the end of sampling. We calculated the overall background noise level for each site by averaging these values.
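Sound levels in decibels are conventionally combined on the energy (squared-pressure) scale rather than arithmetically. A minimal sketch of such an energetic mean over the six five-minute Leq(C) readings (three directions at the start and end of sampling); the reading values below are purely hypothetical, not data from this study:

```python
import math

def energy_average_leq(levels_db):
    """Energetic (power) mean of Leq values in dB:
    L_avg = 10 * log10(mean(10^(L/10)))."""
    powers = [10 ** (level / 10) for level in levels_db]
    return 10 * math.log10(sum(powers) / len(powers))

# Six hypothetical Leq(C) readings: three directions x start/end of sampling
readings = [62.1, 60.5, 63.0, 61.2, 59.8, 62.4]
site_level = energy_average_leq(readings)
```

Because louder readings dominate on the power scale, the energetic mean is always at or above the arithmetic mean of the same decibel values.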

As many species sing multiple types of vocalizations, we chose one specific vocalization type per species (Figure 2) for further analysis in Avisoft SASLab Pro Version 5.2.06 (Avisoft Bioacoustics, Berlin, Germany). Spectrogram settings were: FFT length 1024 with 100% frame size and a Hamming window, giving a 43 Hz frequency resolution and 56 Hz bandwidth resolution for the measurements. We measured the following parameters using the automatic parameter measurement function: minimum frequency, maximum frequency, and peak frequency. Automatic parameter measurements were used to reduce bias and increase consistency of the measures (Zollinger et al. 2012, Ríos-Chelén et al. 2017). Bandwidth (frequency difference) was calculated as the difference between the maximum and minimum frequency. We analyzed at least three vocalizations for each individual included in this study (range: 3-63). All vocalizations were measured separately and then averaged for each individual.
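The quoted 43 Hz frequency resolution follows directly from the sampling rate and FFT length, and the bandwidth and per-individual averaging steps are simple arithmetic. A short sketch of those calculations; the per-vocalization frequency measurements below are hypothetical illustration values, not data from this study:

```python
import numpy as np

FS = 44100       # sampling rate of the recordings (Hz)
FFT_LEN = 1024   # FFT length used for the spectrogram

# Frequency resolution (bin spacing) of the spectrogram: ~43 Hz per bin
freq_resolution = FS / FFT_LEN

# Hypothetical per-vocalization measurements (Hz) for one individual
min_freqs = np.array([2210.0, 2185.0, 2240.0])
max_freqs = np.array([6890.0, 6950.0, 6820.0])

# Bandwidth = maximum minus minimum frequency per vocalization,
# then averaged to yield one value per individual
bandwidths = max_freqs - min_freqs
individual_bandwidth = bandwidths.mean()
```

The same measure-then-average pattern applies to minimum, maximum, and peak frequency, so each individual contributes a single value per parameter to the analysis.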

For the automatic parameter measurements, a -15 dB threshold and a 25 ms hold time were set, with measurements taken at the start, center, and end of each vocalization. A cut-off frequency filter was applied to the recordings before measurements were taken, based on visual inspection of the spectrogram: a high-pass filter removed low-frequency noise, and a low-pass filter was applied to recordings containing continuous high-frequency noise such as insect sounds. Other noise that could potentially affect the automatic measurement was removed with the standard eraser cursor in Avisoft, guided by visual judgement. We did not include any recordings in which songs were so heavily masked that the vocalization could not be clearly distinguished.
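The threshold-plus-hold-time logic can be mimicked outside Avisoft: mark frames whose amplitude (in dB relative to the peak) exceeds the threshold, then bridge any gaps shorter than the hold time so brief dips do not split one vocalization in two. A rough sketch under those assumptions; the envelope values are hypothetical, and the hold time is expressed in frames here, whereas Avisoft specifies it in milliseconds:

```python
import numpy as np

def detect_elements(envelope_db, threshold_db=-15.0, hold_frames=25):
    """Return (start, end) index pairs of segments where the amplitude
    envelope (dB relative to peak) exceeds threshold_db, merging segments
    separated by gaps of at most hold_frames frames."""
    above = envelope_db >= threshold_db
    idx = np.flatnonzero(above)
    if idx.size == 0:
        return None
    segments = []
    start = prev = idx[0]
    for i in idx[1:]:
        if i - prev > hold_frames:       # gap longer than hold time: new segment
            segments.append((start, prev))
            start = i
        prev = i
    segments.append((start, prev))
    return segments
```

With a longer hold time, short amplitude dips inside a vocalization are bridged into a single detected element; with a short hold time, the same envelope splits into separate segments.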

Funding

Plymouth University