Automated Blue Whale Song Transcription Across Variable Acoustic Contexts

OCEANS 2019 - Marseille (2019)

Abstract
The size of the sound archives collected globally by the community to monitor cetaceans, including blue whales, is rapidly increasing. Analyzing these vast amounts of data efficiently requires reliable automated detection algorithms. Typically, these algorithms focus on a specific call type produced by a single species. We developed an automatic transcription algorithm that can identify multiple concurrently calling species in sound recordings. The algorithm was tested on data containing series of calls (songs) of Madagascar pygmy blue whales and series of the 27 Hz tonal unit named the "P-call". The algorithm is based on pattern recognition of tonal calls in the time-frequency domain, where (1) segmentation is realized by detection of tonal signals, (2) features are extracted from their time-frequency-amplitude information, and (3) classification is realized by clustering. The classified tonal signals are then used to reconstruct, separately, the underlying songs. We successfully trained and tested the algorithm on data (> 4000 annotated calls) recorded in the Western Indian Ocean, achieving an overall precision of 97.2% and a recall of 96.9%.
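The three-step pipeline described in the abstract (tonal-signal segmentation, time-frequency-amplitude feature extraction, clustering) can be illustrated with a minimal sketch. This is not the authors' implementation: the threshold rule, the two-value feature vector, and the nearest-center assignment below are simplified stand-ins chosen for illustration, and the function names and the candidate frequencies are hypothetical.

```python
import numpy as np

def detect_tonal_segments(spectrogram, threshold_db=10.0):
    """Step 1 (segmentation, simplified): flag time frames whose peak
    narrowband level exceeds the frame's median level by threshold_db,
    a crude proxy for detecting tonal signals against broadband noise."""
    peak = spectrogram.max(axis=0)
    median = np.median(spectrogram, axis=0)
    return peak - median > threshold_db

def extract_features(spectrogram, freqs, mask):
    """Step 2 (feature extraction, simplified): for each detected frame,
    record the peak frequency (Hz) and its level (dB)."""
    feats = []
    for t in np.where(mask)[0]:
        k = int(np.argmax(spectrogram[:, t]))
        feats.append((freqs[k], spectrogram[k, t]))
    return np.array(feats)

def cluster_by_frequency(features, centers_hz):
    """Step 3 (classification, simplified): assign each detection to the
    nearest candidate call frequency -- a stand-in for the paper's
    unsupervised clustering step."""
    return np.argmin(np.abs(features[:, :1] - np.asarray(centers_hz)), axis=1)

# Toy example: 1-Hz frequency bins, 10 frames, a 27 Hz tone (P-call-like)
# in the first 5 frames and a 35 Hz tone in the last 5.
freqs = np.arange(100.0)
spec = np.zeros((100, 10))
spec[27, :5] = 20.0
spec[35, 5:] = 20.0

mask = detect_tonal_segments(spec)
feats = extract_features(spec, freqs, mask)
labels = cluster_by_frequency(feats, centers_hz=[27.0, 35.0])
```

Grouping the labeled detections by cluster, and ordering each group in time, then yields the per-species call series from which the songs are reconstructed.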
Key words
Passive acoustic monitoring, Signal processing, Bioacoustics, Blue whale