
Wavelet filters for automated recognition of birdsong in long-time field recordings

Cite this dataset

Priyadarshani, Nirosha et al. (2020). Wavelet filters for automated recognition of birdsong in long-time field recordings [Dataset]. Dryad. https://doi.org/10.5061/dryad.f4qrfj6sd

Abstract

1. Ecoacoustics has the potential to provide a large amount of information about the abundance of many animal species at relatively low cost. Acoustic recording units are widely used for field data collection, but the facilities to reliably process the recorded data -- recognising calls that are relatively infrequent, and often significantly degraded by noise and distance from the microphone -- are not yet well developed.

2. We propose a call detection method for continuous field recordings that can be trained quickly and easily on new species, and that degrades gracefully with increased noise or distance from the microphone. The method is based on reconstructing the sound from a subset of the wavelet nodes (elements in the wavelet packet decomposition tree). It is intended as a preprocessing filter, so we aim to minimise false negatives: false positives can be removed in subsequent processing, but missed calls will never be looked at again.

3. We compare our method to standard call detection methods, and also to machine learning methods (using either wavelet energies or Mel-frequency cepstral coefficients (MFCCs) as input features), on real-world noisy field recordings of six bird species. The results show that our method has higher recall (proportion detected) than the alternative methods: 87% recall with 85% specificity on >53 hrs of test data, resulting in an 80% reduction in the amount of data that needed further verification. It detected >60% of calls that were extremely faint (far away), even with high background noise.

4. This preprocessing method is available in our AviaNZ bioacoustic analysis program and enables the user to significantly reduce the amount of subsequent processing (whether manual or automatic) required to analyse continuous field recordings collected by spatially and temporally large-scale monitoring of animal species. It can be trained to recognise new species without difficulty, and if several species are sought simultaneously, filters can be run in parallel.
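The reconstruction-from-wavelet-nodes idea in point 2 can be illustrated with a minimal sketch. This is not the AviaNZ implementation: it uses a hand-rolled Haar wavelet packet for self-containment (the paper's filters, wavelet family, node-selection training, and detection thresholds are more sophisticated), and all function names here are our own illustrative choices.

```python
import numpy as np


def haar_step(x):
    # One level of Haar analysis: approximation and detail coefficients.
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d


def inverse_haar_step(a, d):
    # Invert one Haar analysis step (exact, since Haar is orthogonal).
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x


def wavelet_packet(x, level):
    # Full wavelet packet tree: both approximation ('a') and detail ('d')
    # branches are split at every level. Returns {path: coefficients} for
    # the 2**level leaf nodes, e.g. 'aad' at level 3.
    nodes = {"": np.asarray(x, dtype=float)}
    for _ in range(level):
        nxt = {}
        for path, coeffs in nodes.items():
            a, d = haar_step(coeffs)
            nxt[path + "a"] = a
            nxt[path + "d"] = d
        nodes = nxt
    return nodes


def reconstruct(nodes, level, keep):
    # Zero every leaf not in `keep`, then invert the tree bottom-up.
    # Keeping only species-relevant nodes acts as the preprocessing filter.
    current = {p: (c if p in keep else np.zeros_like(c))
               for p, c in nodes.items()}
    for _ in range(level):
        parents = {p[:-1] for p in current}
        current = {p: inverse_haar_step(current[p + "a"], current[p + "d"])
                   for p in parents}
    return current[""]


def detect_frames(filtered, frame_len, threshold):
    # Flag frames whose energy in the node-filtered signal exceeds a
    # threshold -- a stand-in for the paper's detection stage.
    n = len(filtered) // frame_len
    energies = np.array([np.sum(filtered[i * frame_len:(i + 1) * frame_len] ** 2)
                         for i in range(n)])
    return energies > threshold
```

In this sketch, "training on a new species" would amount to choosing which leaf paths go into `keep` (e.g. the nodes whose energy best separates annotated calls from background); running several species' filters in parallel is then just several `keep` sets over the same decomposition.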

Usage notes

These are the annotated acoustic datasets for five of the species used in the paper: brown kiwi (Apteryx mantelli), morepork (Ninox novaeseelandiae), kakapo (Strigops habroptilus), fantail (Rhipidura fuliginosa), and saddleback (Philesturnus rufusater). Each dataset includes the sound files (.wav) and annotation files (.data; AviaNZ format). The recordings were mainly collected using omnidirectional acoustic recorders from the New Zealand Department of Conservation (DOC), while some recordings were made using SM2 recorders. The data were collected from a variety of locations across New Zealand.
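A minimal sketch of loading one recording and its annotations, using only the Python standard library. It assumes the AviaNZ `.data` files are JSON (segment lists alongside each `.wav`); check the AviaNZ documentation for the authoritative format, and the function name here is our own.

```python
import json
import wave


def load_recording(wav_path, data_path):
    # Read the PCM audio and its paired annotation file.
    # Assumption: AviaNZ-format .data files are JSON segment lists.
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    with open(data_path) as f:
        annotations = json.load(f)
    return rate, frames, annotations
```

Usage: `rate, frames, annotations = load_recording("rec.wav", "rec.wav.data")`, then decode `frames` (e.g. with `numpy.frombuffer`) for analysis.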

Funding

Royal Society of New Zealand Te Aparangi, Award: 17-MAU-154

Te Punaha Matatini - New Zealand Centre of Research Excellence in Complex Systems

Kiwi Recovery Group, New Zealand Department of Conservation

National Science Challenge on Science for Technological Innovation