2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8461833

Investigating the Effect of Sound-Event Loudness on Crowdsourced Audio Annotations

Citation Types: 1 / 41 / 1 (41 statements are tagged as 'mentioning')
Year Published: 2019–2023

Cited by 18 publications (43 citation statements); references 11 publications.
“…The new label collection aimed to resolve all three of these issues. Instead of simple present/not present checkboxes, the annotators interacted with multiple timelines alongside a spectrogram representation, on which they could drag out time regions indicating the extent of each sound event (comparable to Audio Annotator [17]). Annotators quickly became adept at marking time regions on this display and we judge their timings to be precise at least to 0.1 sec resolution (based on informal spot-checks).…”
Section: Strong-labeled Dataset
confidence: 99%
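The excerpt above describes strong labels: time regions dragged out on a timeline over a spectrogram, with boundaries judged precise to roughly 0.1 s. A minimal sketch of how such annotations might be represented and snapped to that resolution is given below; the StrongLabel class, the snap helpers, and the example values are illustrative assumptions, not code from the cited papers.

```python
from dataclasses import dataclass

# Illustrative sketch only: the cited work does not publish this code.
# A "strong" label is a sound-event class plus the time region an
# annotator dragged out on the timeline.

@dataclass
class StrongLabel:
    onset: float    # seconds from clip start
    offset: float   # seconds from clip start
    event: str      # sound-event class, e.g. "dog_bark"

def snap(t: float, resolution: float = 0.1) -> float:
    """Round a time boundary to the annotation resolution (assumed 0.1 s)."""
    return round(round(t / resolution) * resolution, 3)

def snap_label(label: StrongLabel, resolution: float = 0.1) -> StrongLabel:
    """Return a copy of the label with its boundaries snapped to the grid."""
    return StrongLabel(snap(label.onset, resolution),
                       snap(label.offset, resolution),
                       label.event)

# Example: an annotator-drawn region, snapped to 0.1 s.
raw = StrongLabel(onset=1.234, offset=2.781, event="siren")
print(snap_label(raw))   # StrongLabel(onset=1.2, offset=2.8, event='siren')
```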
“…Crowdsourcing studies aimed at eliciting labels for training datasets from non-experts have also shown how differences in how information is shown [3] or what sorts of interruptions a crowdworker experiences [55] can impact the quality of labels obtained. Others have explored the value of incentivizing label and other data collection from non-experts according to their usefulness for model development (e.g., [56]).…”
Section: Knowledge Elicitation for Expert Decision Making
confidence: 99%
“…Fields like judgment and decision making, in which eliciting (often probabilistic) prior beliefs is a topic of study, have provided evidence that how knowledge is elicited can affect the usefulness of the information. Some evidence of the effects of elicitation approaches can also be found in sub-areas of computer science that have focused mostly on non-experts like crowdsourced labeling (e.g., [3]) and active learning (e.g., [4]). However, to date few attempts have been made to characterize the space of decisions that ML researchers and practitioners make in eliciting domain knowledge from experts, and elicitation itself is rarely a topic in ML research.…”
Section: Introduction
confidence: 99%
“…It has been reported that expert annotators do not even have to listen to audio signals, and perform the annotation task only with spectrograms [41]. However, despite the efficiency in identifying changes in the audio event, past studies have also suggested that it requires some experience to recognize and interpret the visual patterns of a spectrogram representation [12]. In the context of interactive machine learning for novice users, it is not clear whether spectrograms can be used to inspect audio contents and whether the spectrogram representation is the best way for information visualization.…”
Section: Sound Recognition and Annotation
confidence: 99%
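The passage above turns on whether a spectrogram is a usable visualization for novice annotators. As a point of reference, the sketch below renders the kind of log-frequency spectrogram such interfaces display, using librosa and matplotlib; the file name "clip.wav" and the STFT parameters (n_fft=2048, hop_length=512) are arbitrary illustrative choices, not settings from any of the cited works.

```python
# Illustrative sketch: renders a log-frequency spectrogram of the kind
# annotators inspect in the interfaces discussed above. File path and
# STFT parameters are arbitrary choices, not from the cited work.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("clip.wav", sr=None)           # keep the native sample rate
stft = librosa.stft(y, n_fft=2048, hop_length=512)  # short-time Fourier transform
spec_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

fig, ax = plt.subplots(figsize=(10, 4))
img = librosa.display.specshow(spec_db, sr=sr, hop_length=512,
                               x_axis="time", y_axis="log", ax=ax)
fig.colorbar(img, ax=ax, format="%+2.0f dB")
ax.set_title("Spectrogram shown alongside annotation timelines")
fig.tight_layout()
plt.show()
```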