2022
DOI: 10.3390/s22218361

A Review of Automated Bioacoustics and General Acoustics Classification Research

Abstract: Automated bioacoustics classification has received increasing attention from the research community in recent years due to its cross-disciplinary nature and its diverse applications. Applications in bioacoustics classification range from smart acoustic sensor networks that investigate the effects of acoustic vocalizations on species to context-aware edge devices that anticipate changes in their environment and adapt their sensing and processing accordingly. The research described here is an in-depth survey of the curr…

Cited by 7 publications (10 citation statements)
References 154 publications
“…Broadly, the process requires recording sound (multiple vocalisations per individual on separate occasions), extracting and quantifying their acoustic features ('feature extraction') from the recording and using these features to classify the vocalisation as belonging to one of a number of possible individuals 7 . Recently, with greater computational power becoming more and more accessible, new, automated methods for feature extraction and classification are becoming increasingly popular [8][9][10] . The ease of application of various classifiers, including those using machine learning techniques, has provided many opportunities within the field of bioacoustics [11][12][13] .…”
Section: Introduction (mentioning; confidence: 99%)
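The record → feature extraction → classification workflow described in the statement above can be sketched minimally. Everything here is illustrative, not from the survey: the sample rate, the synthetic tonal "vocalisations", the two hand-picked features (peak frequency, RMS energy), and the nearest-centroid classifier are all assumptions standing in for a real recording pipeline and a real model.

```python
import numpy as np

SR = 22050  # assumed sample rate (Hz) for this synthetic sketch

def synth_call(freq_hz, dur_s=0.5, sr=SR):
    """Generate a synthetic tonal 'vocalisation' at a given frequency."""
    t = np.linspace(0, dur_s, int(sr * dur_s), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t)

def extract_features(signal, sr=SR):
    """Feature extraction: peak frequency and RMS energy of the recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    peak_freq = freqs[np.argmax(spectrum)]
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([peak_freq, rms])

# "Training": multiple vocalisations per individual, as the statement
# describes; individuals and their call frequencies are hypothetical.
individuals = {"A": 1000.0, "B": 3000.0}
centroids = {
    name: np.mean([extract_features(synth_call(f + jitter))
                   for jitter in (-50.0, 0.0, 50.0)], axis=0)
    for name, f in individuals.items()
}

def classify(signal):
    """Assign a recording to the individual with the nearest feature centroid."""
    feats = extract_features(signal)
    return min(centroids, key=lambda n: np.linalg.norm(feats - centroids[n]))

print(classify(synth_call(2950.0)))  # a call near individual B's frequency
```

In practice the hand-crafted features would be replaced by richer representations (e.g. spectrogram-derived features) and the nearest-centroid step by one of the machine-learning classifiers the cited reviews cover; the shape of the pipeline stays the same.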
“…Autonomous devices such as camera traps offer a wide range of options for accessible and non-invasive biomonitoring (Steenweg et al 2017;Rudolfi and Poerting 2020); however, they generate large volumes of data, which can be difficult to manage (Norouzzadeh et al 2018). With the increasing utilization of camera traps (Green et al 2020) and audio recorders (Mutanu et al 2022), the amount of stored data (images, videos, acoustic data) collected is rapidly expanding. Scientists and conservationists need to meet the challenge of processing such data streams, and if possible, in real time, which is essential for nature conservation issues such as anti-poaching (Tan et al 2016;Heyns 2021) and detecting biodiversity trends at a local and global scale (Chandler et al 2017;Steenweg et al 2017).…”
Section: Introduction (mentioning; confidence: 99%)
“…Automated bioacoustics models, used to monitor animals by the sounds they emit, have numerous recognized applications to conservation science, including early detection of habitat deterioration, inference of dispersal, and assessment of population density and diversity, though these advances have primarily focused on vertebrates such as birds, elephants and whales (Laiolo, 2010). However, bioacoustics has tremendous potential for enabling rapid, replicable, and cost‐effective monitoring of insect communities at large temporal and spatial scales in a non‐invasive manner (Mankin et al., 2021; Mutanu et al., 2022). The primary goal of this study is to provide ecologists, entomologists and conservation practitioners with a systematic review of the literature on automated insect bioacoustics modelling.…”
Section: Introduction (mentioning; confidence: 99%)
“…This paradigm shift progresses away from methods involving a substantial amount of human‐directed data preprocessing (“feature engineering”: manual characterization and/or extraction of salient acoustic features) towards methods that learn those features as part of the ML pipeline itself. We define features as the input data used for modelling, such as the peak frequency or pulse duration, that are extracted from spectrograms or waveforms to reduce the dimensions of the audio data and provide the model with only relevant material (Mutanu et al., 2022; Stowell, 2022). Increasingly, feature extraction/input reduction is removed from the pipeline, presenting less preprocessed and higher‐dimensional data for models to learn from.…”
Section: Introduction (mentioning; confidence: 99%)
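The dimensionality reduction the statement above describes — collapsing a high-dimensional waveform to a few engineered features such as peak frequency and pulse duration — can be illustrated with a short sketch. The sample rate, the synthetic 440 Hz signal, and the amplitude threshold used to measure pulse duration are assumptions made for illustration only.

```python
import numpy as np

sr = 22050  # assumed sample rate (Hz)
t = np.linspace(0, 1.0, sr, endpoint=False)
wave = np.sin(2 * np.pi * 440.0 * t)  # synthetic 440 Hz "pulse"

# Peak frequency: the dominant bin of the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1.0 / sr)
peak_frequency = freqs[np.argmax(spectrum)]

# Pulse duration: time the amplitude envelope stays above a
# (hypothetical) threshold of 10% of its maximum.
envelope = np.abs(wave)
active = envelope > 0.1 * envelope.max()
pulse_duration = active.sum() / sr  # seconds above threshold

features = np.array([peak_frequency, pulse_duration])
print(wave.size, "raw samples ->", features.size, "features")
```

This makes the trade-off concrete: 22,050 raw samples become a 2-element feature vector. The end-to-end methods the statement contrasts this with skip this step and hand the model the less-preprocessed, higher-dimensional spectrogram or waveform directly.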