2021
DOI: 10.48550/arxiv.2111.02006
Preprint

A Strongly-Labelled Polyphonic Dataset of Urban Sounds with Spatiotemporal Context

Cited by 1 publication (1 citation statement)
References 15 publications
“…For example, the USM-SED dataset generated 20,000 polyphonic soundscapes by mixing sounds from the FSD50K dataset [31]. On the other hand, some datasets have been compiled from a set of recordings, usually made by a single user or a sensor network, such as SONYC-UST [32], consisting of 3068 recordings from an acoustic sensor network deployed in New York City covering 23 different sound classes tagged by volunteers, or SINGA:PURA [33], with a total of 18.2 h of audio data recorded through a wireless acoustic sensor network deployed in the city of Singapore. Both SONYC-UST and SINGA:PURA tagged the sound events using a hierarchical taxonomy.…”
Section: Related Work
Confidence: 99%