Animal acoustic communication often takes the form of complex sequences, made up of multiple distinct acoustic units. Apart from the well-known example of birdsong, other animals such as insects, amphibians, and mammals (including bats, rodents, primates, and cetaceans) also generate complex acoustic sequences. Occasionally, such as with birdsong, the adaptive role of these sequences seems clear (e.g. mate attraction and territorial defence). More often, however, researchers have only begun to characterise, let alone understand, the significance and meaning of acoustic sequences. Hypotheses abound, but there is little agreement as to how sequences should be defined and analysed. Our review aims to outline suitable methods for testing these hypotheses, and to describe the major limitations to our current and near-future knowledge on questions of acoustic sequences. This review and prospectus is the result of a collaborative effort between 43 scientists from the fields of animal behaviour, ecology and evolution, signal processing, machine learning, quantitative linguistics, and information theory, who gathered for a 2013 workshop entitled "Analysing vocal sequences in animals". Our goal is to present not just a review of the state of the art, but to propose a methodological framework summarising what we suggest are best practices for research in this field, across taxa and across disciplines. We also provide a tutorial-style introduction to some of the most promising algorithmic approaches for analysing sequences. We divide our review into three sections: identifying the distinct units of an acoustic sequence, describing the different ways that information can be contained within a sequence, and analysing the structure of that sequence. Each of these sections is further subdivided to address the key questions and approaches in that area. We propose a uniform, systematic, and comprehensive approach to studying sequences, with the goal of clarifying research terms used in different fields and facilitating collaboration and comparative studies. Such interdisciplinary collaboration will, in turn, enable the investigation of many important questions in the evolution of communication and sociality.
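One family of approaches commonly covered in tutorial treatments of sequence analysis is Markov-chain modelling of transitions between acoustic units. As a minimal illustration of that idea (a Python sketch, not code from the review itself; the unit labels and example song are invented), the snippet below estimates a first-order transition matrix from a labelled unit sequence:

```python
import numpy as np

def transition_matrix(sequence, units):
    """Estimate first-order Markov transition probabilities
    from a labelled sequence of acoustic units."""
    idx = {u: i for i, u in enumerate(units)}
    counts = np.zeros((len(units), len(units)))
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a], idx[b]] += 1
    # Normalise each row into a probability distribution; rows
    # for units with no observed successors stay all-zero.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical song built from three unit types, A, B, and C.
song = list("ABABCABCCAB")
P = transition_matrix(song, units=["A", "B", "C"])
print(P)  # P[i, j] = Pr(next unit is j | current unit is i)
```

Departures of the observed matrix from a null model (e.g. shuffled sequences) are one simple way to test whether unit order carries structure.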
The accelerating loss of biodiversity worldwide demands effective tools for monitoring animal populations and informing conservation action. In habitats where direct observation is difficult (rain forests, oceans), or for cryptic species (shy, nocturnal), passive acoustic monitoring (PAM) provides cost-effective, unbiased data collection. PAM has broad applicability in terrestrial environments, particularly tropical rain forests. Using examples from studies of forest elephants in Central African rain forest, we show how PAM can be used to investigate cryptic behaviour and mechanisms of communication, estimate population size, quantify threats, and assess the efficacy of conservation strategies. We discuss the methodologies, requirements, and challenges of obtaining these data using acoustics, and, where applicable, compare these methods to more traditional approaches. While PAM methods and the associated analyses are maturing rapidly, mechanisms are still needed for processing the dense raw data efficiently on standard computer hardware, speeding the development of detection algorithms, and harnessing communication networks to move data from the field to research facilities. Passive acoustic monitoring is a viable and cost-effective tool for conservation and should be incorporated into monitoring schemes much more broadly. The capability to quickly assess changes in behaviour, population size, and landscape use, simultaneously over large geographical areas, makes this approach attractive for detecting human-induced impacts and for assessing the success of conservation strategies.
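To make the detection problem concrete, below is a minimal Python sketch of a band-limited energy detector of the kind often used as a first-pass stage in PAM pipelines. It is an illustration only, not the detector used in the elephant studies; the band edges (chosen to roughly bracket the low-frequency fundamental of elephant rumbles) and the threshold are assumptions for the example:

```python
import numpy as np
from scipy import signal

def band_energy_detector(audio, sr, fmin=15.0, fmax=40.0,
                         threshold_db=10.0):
    """Return times of spectrogram frames whose energy in a
    low-frequency band exceeds the median band energy by
    `threshold_db` decibels (all parameter values illustrative)."""
    f, t, Sxx = signal.spectrogram(audio, fs=sr, nperseg=4096)
    band = (f >= fmin) & (f <= fmax)
    band_db = 10.0 * np.log10(Sxx[band].sum(axis=0) + 1e-12)
    return t[band_db > np.median(band_db) + threshold_db]
```

Operational systems layer more robust detectors and human review on top of such a first pass, but the sketch shows why dense, continuous recordings demand efficient automated processing.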
Deep neural networks have advanced the field of detection and classification and allowed for effective identification of signals in challenging data sets. Numerous time-critical conservation needs may benefit from these methods. We developed and empirically evaluated a variety of deep neural networks to detect the vocalizations of endangered North Atlantic right whales (Eubalaena glacialis). We compared the performance of these deep architectures to that of traditional detection algorithms for the primary vocalization produced by this species, the upcall. We show that deep-learning architectures are capable of producing false-positive rates that are orders of magnitude lower than those of alternative algorithms while substantially increasing the ability to detect calls. We demonstrate that a deep neural network trained with recordings from a single geographic region, recorded over a span of days, is capable of generalizing well to data from multiple years and across the species' range, and that the low false-positive rate makes the output of the algorithm amenable to quality-control verification. The deep neural networks we developed are relatively easy to implement with existing software, and may provide new insights applicable to the conservation of endangered species.
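As an illustration of the general approach (not the authors' specific architecture), the PyTorch sketch below defines a small convolutional classifier that labels fixed-size spectrogram patches as upcall or background; the layer sizes, 64x64 patch dimensions, and two-class setup are assumptions for the example:

```python
import torch
import torch.nn as nn

class UpcallCNN(nn.Module):
    """Small CNN that classifies 64x64 spectrogram patches as
    upcall vs. background (an illustrative architecture, not the
    networks evaluated in the study)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                   # logits: [noise, upcall]
        )

    def forward(self, x):  # x: (batch, 1, 64, 64) spectrogram patches
        return self.net(x)

# One training step on a dummy batch, to show the intended usage.
model = UpcallCNN()
x = torch.randn(8, 1, 64, 64)   # stand-in spectrogram patches
y = torch.randint(0, 2, (8,))   # stand-in labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
```

Sliding such a classifier across continuous recordings, and thresholding its logits, turns a patch classifier into a call detector whose operating point can be tuned toward the very low false-positive rates the study reports.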
Animals produce a wide array of sounds with highly variable acoustic structures. It is possible to understand the causes and consequences of this variation across taxa with phylogenetic comparative analyses. Acoustic and evolutionary analyses are rapidly increasing in sophistication, such that choosing appropriate acoustic and evolutionary approaches is increasingly difficult; yet the choice of analysis can have profound effects on output and evolutionary inferences. Here, we identify and address some of the challenges for this growing field by providing a roadmap for quantifying and comparing sound in a phylogenetic context for researchers with a broad range of scientific backgrounds. Sound, as a continuous, multidimensional trait, can be particularly challenging to measure: it is hard to identify variables that can be compared across taxa, and it is no small feat to process and analyse the resulting high-dimensional acoustic data using approaches that are appropriate for subsequent evolutionary analysis. Additionally, terminological inconsistencies and the role of learning in the development of acoustic traits need to be considered. Phylogenetic comparative analyses also have their own sets of caveats to consider. We provide a set of recommendations for delimiting acoustic signals into discrete, comparable acoustic units. We also present a three-stage workflow for extracting relevant acoustic data, including options for multivariate analyses and dimensionality reduction that are compatible with phylogenetic comparative analysis. We then summarize available phylogenetic comparative approaches and how they have been used in comparative bioacoustics, and address the limitations of comparative analyses with behavioural data. Lastly, we recommend how to apply these methods to acoustic data across a range of study systems. In this way, we provide an integrated framework to aid in quantitative analysis of cross-taxa variation in animal sounds for comparative phylogenetic analysis. In addition, we advocate the standardization of acoustic terminology across disciplines and taxa, the adoption of automated methods for acoustic feature extraction, and the establishment of strong data archival practices for acoustic recordings and data analyses. Combining such practices with our proposed workflow will greatly advance the reproducibility, biological interpretation, and longevity of comparative bioacoustic studies.
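To sketch the first two stages of such a workflow, the hypothetical Python example below summarises each recording as a fixed-length feature vector (mean MFCCs via librosa) and reduces the species-by-feature matrix with PCA; the species and file names are invented, and in practice the resulting scores would be carried into a phylogenetic comparative method (e.g. PGLS) rather than analysed with ordinary statistics, since PCA and regression on their own ignore phylogeny:

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def acoustic_features(path, n_mfcc=13):
    """Summarise one recording as its mean MFCC vector, a common
    fixed-length representation for cross-taxa comparison."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Hypothetical species-to-recording mapping (file names invented).
recordings = {"species_a": "a.wav",
              "species_b": "b.wav",
              "species_c": "c.wav"}
X = np.vstack([acoustic_features(p) for p in recordings.values()])

# Standardise, then reduce to a few orthogonal axes; the scores
# would then enter a phylogenetic comparative analysis.
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(X))
```

Automating feature extraction this way, rather than measuring spectrograms by hand, also directly supports the reproducibility and archival practices the abstract advocates.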