2022
DOI: 10.1101/2022.12.19.521085
Preprint

Is there an accurate and generalisable way to use soundscapes to monitor biodiversity?

Abstract: Acoustic monitoring has the potential to deliver biodiversity insight on vast scales. Whilst autonomous recording networks are being deployed across the world, existing analytical techniques struggle with generalisability. This limits the insight that can be derived from audio recordings in regions without ground-truth calibration data. By calculating 128 learned features and 60 soundscape indices of audio recorded during 8,023 avifaunal point counts from diverse ecosystems, we investigated the generalisabilit…

Cited by 6 publications (16 citation statements) | References 25 publications
“…However, there are contradictory patterns reported in the literature. There do not appear to be any acoustic indices that hold a consistent relationship with species richness or density across regions or taxonomic communities, limiting their potential as biodiversity proxies (Alcocer et al., 2022; Sethi et al., 2023). Some contradictory findings may arise from issues in research design (see considerations below), but other inconsistencies reflect soundscape differences among species, acoustic communities, biomes and ecosystems (Barbaro et al., 2022; Buxton et al., 2018; Eldridge et al., 2018).…”
Section: Identifying Your Paradigm
confidence: 99%
“…As with all ecological data, exploratory analyses are necessary to sense‐check apparent trends against the patterns that acoustic indices may reflect (Table S1 and the references therein). This is particularly important when associating indices with biodiversity metrics; our current understanding is that there is no widely generalisable relationship between acoustic diversity and biodiversity (Alcocer et al., 2022; Sethi et al., 2023). Moreover, ‘biodiversity’ includes a broad range of different metrics, which themselves should be carefully selected for each ecological question.…”
Section: Interpreting Acoustic Indices
confidence: 99%
“…The success of deep learning (DL) on real-world audio data such as human speech recognition [30, 16, 40] and music generation [19, 36, 4] means many are now looking to deep neural networks for automated soundscape analysis [11]. Convolutional neural networks (CNNs) have proven successful at bioacoustic segmentation and classification tasks [20, 26], and early applications generating whole-soundscape descriptors show some promise [8, 41, 44, 42]. Audio CNNs learn a hierarchy of feature maps that embed latent representations capturing key characteristics of acoustic events.…”
Section: Introduction
confidence: 99%
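The "hierarchy of feature maps" idea in the statement above can be illustrated with a single convolutional layer over a spectrogram. This is a toy NumPy sketch, not any of the cited networks: the kernel is hand-picked rather than learned, and the time-pooling step is only a stand-in for how a CNN collapses feature maps into a fixed-length embedding.

```python
import numpy as np

def conv2d_relu(x, kernel):
    """Valid 2-D cross-correlation followed by ReLU: one CNN feature map."""
    kh, kw = kernel.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU keeps only positive responses

# A spectrogram-like input: 64 frequency bands x 96 time frames
spec = np.random.default_rng(0).random((64, 96))

# A hand-picked 3x2 kernel responding to left-to-right energy increases
# (a crude onset detector; a trained CNN would learn many such kernels)
kernel = np.array([[-1.0, 1.0]] * 3)

fmap = conv2d_relu(spec, kernel)          # shape (62, 95)
embedding = fmap.mean(axis=1)             # pool over time -> length-62 descriptor
```

Stacking many such layers, each with many learned kernels, is what lets audio CNNs build progressively more abstract descriptors of acoustic events.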
“…Several recent papers have trialled using VGGish, a CNN pre-trained with supervised learning, to extract features from PAM audio spectrograms [41, 44, 42, 43]. However, VGGish’s training data and pre-processing pipeline hard-code prior assumptions about the feature-recognition task that may not necessarily suit ecoacoustics.…”
Section: Introduction
confidence: 99%
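The "pre-processing pipeline" criticised above refers to the log-mel spectrogram front end that VGGish applies before the network sees any audio. The sketch below is an illustrative NumPy reimplementation of that style of front end, not the library code; the parameters (16 kHz audio, 64 mel bands, a 125–7500 Hz passband, 10 ms hop) roughly follow VGGish's published configuration, and they are exactly the kind of hard-coded assumptions — e.g. the 7.5 kHz ceiling — that may discard ecologically relevant signal such as high-frequency bird or bat calls.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(audio, sr=16000, n_fft=512, hop=160,
                        n_mels=64, fmin=125.0, fmax=7500.0):
    # Frame the signal and apply a Hann window
    n_frames = 1 + (len(audio) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = audio[idx] * np.hanning(n_fft)
    # Power spectrum via real FFT
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular mel filterbank between fmin and fmax
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(fmin), hz_to_mel(fmax),
                                   n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
    # Log compression, with a small offset for numerical stability
    return np.log(power @ fb.T + 1e-6)

# One second of a 1 kHz tone -> (time frames, mel bands) feature matrix
t = np.arange(16000) / 16000.0
feats = log_mel_spectrogram(np.sin(2 * np.pi * 1000.0 * t))
```

Choices like the mel warping and the fixed passband bake in priors tuned to human-centric audio (speech, music); an ecoacoustics front end might instead widen the frequency range or learn the filterbank from field recordings.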