2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8462153
Crowdsourced Pairwise-Comparison for Source Separation Evaluation

Cited by 7 publications (2 citation statements)
References 8 publications
“…We collect a dataset of human judgments using modern crowdsourcing tools, which have been shown to perform similarly to expert, in-lab tests [14,15]. We present a listener with two recordings, a reference x_ref and a perturbed signal x_per, ask whether these two audio clips are exactly the same or different, and record the binary response h ∈ {0, 1}.…”
Section: Data Collection Methodology
confidence: 99%
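The statement above describes a same/different listening trial whose outcome is a binary judgment h ∈ {0, 1} per listener. As a minimal sketch, assuming a hypothetical trial record and helper name (`Trial`, `detection_rates` are not from the cited work), such responses might be stored and aggregated like this:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Trial:
    """One pairwise listening trial: a reference clip vs. a perturbed clip."""
    reference: str   # path or ID of the reference recording x_ref
    perturbed: str   # path or ID of the perturbed recording x_per

def detection_rates(responses):
    """Aggregate binary same/different judgments per trial.

    `responses` is an iterable of (Trial, h) pairs, where h == 1 means the
    listener judged the two clips to be different. Returns the fraction of
    'different' responses for each trial.
    """
    counts = defaultdict(lambda: [0, 0])  # trial -> [num_different, num_total]
    for trial, h in responses:
        counts[trial][0] += h
        counts[trial][1] += 1
    return {t: diff / total for t, (diff, total) in counts.items()}

if __name__ == "__main__":
    t = Trial("mix01_ref.wav", "mix01_sep.wav")
    judgments = [(t, 1), (t, 0), (t, 1)]   # three crowdsourced listeners
    print(detection_rates(judgments))      # {Trial(...): 0.666...}
```

The per-trial fraction of "different" responses gives one simple per-condition detectability estimate; the cited study's actual analysis may differ.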
“…As we do not have access to the source tracks, we cannot evaluate the separation performance using common objective evaluation metrics. Instead, we conduct a subjective evaluation of the source separation quality (Cartwright et al., 2016, 2018) with 51 people. Some subjects are students or faculty from the University of Rochester; others are subscribers from the International Society for Music Information Retrieval (ISMIR) community.…”
Section: Subjective Evaluation on Professional A Cappella Songs
confidence: 99%
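The pairwise-comparison evaluation referenced here (Cartwright et al., 2016, 2018) asks listeners to choose between two separated outputs. As a rough illustration only, the sketch below aggregates such votes with a simple win count; the vote format and `rank_by_wins` are hypothetical stand-ins, and the cited work's actual analysis may use a different statistical model:

```python
from collections import Counter

def rank_by_wins(votes):
    """Rank systems from crowdsourced pairwise-preference votes.

    `votes` is an iterable of (winner, loser) system-ID pairs, one per
    listener judgment. Returns system IDs sorted by total wins, a simple
    stand-in for a fuller pairwise-comparison analysis.
    """
    wins = Counter()
    for winner, loser in votes:
        wins[winner] += 1
        wins.setdefault(loser, 0)  # keep systems that never win in the ranking
    return [system for system, _ in wins.most_common()]

if __name__ == "__main__":
    votes = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
    print(rank_by_wins(votes))  # ['A', 'B', 'C']
```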