2017
DOI: 10.1007/978-3-319-53547-0_21

Psychophysical Evaluation of Audio Source Separation Methods

Abstract: Source separation evaluation is typically a top-down process, starting with perceptual measures which capture fitness-for-purpose and followed by attempts to find physical (objective) measures that are predictive of the perceptual measures. In this paper, we take a contrasting bottom-up approach. We begin with the physical measures provided by the Blind Source Separation Evaluation Toolkit (BSS Eval) and we then look for corresponding perceptual correlates. This approach is known as psychophysics and…
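
The abstract refers to the physical measures provided by the BSS Eval toolkit (energy-ratio statistics such as SDR, SIR and SAR). As a minimal sketch of how such measures are obtained in practice, the snippet below uses the Python `mir_eval` package, which reimplements the BSS Eval metrics; the package choice and the placeholder signals are assumptions for illustration, not the toolkit or data used in the paper.

```python
# Minimal sketch: computing BSS Eval-style measures for estimated sources.
# Assumes the Python `mir_eval` package (a reimplementation of BSS Eval);
# the random signals below are placeholders for real reference/estimated audio.
import numpy as np
import mir_eval

rng = np.random.default_rng(0)
n_samples = 44100  # one second at 44.1 kHz

# Reference sources: shape (n_sources, n_samples), e.g. vocals and accompaniment.
reference = rng.standard_normal((2, n_samples))

# Estimated sources from some separation algorithm (here: noisy copies).
estimated = reference + 0.1 * rng.standard_normal((2, n_samples))

sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(reference, estimated)
for i in range(len(sdr)):
    print(f"source {i}: SDR={sdr[i]:.1f} dB, SIR={sir[i]:.1f} dB, SAR={sar[i]:.1f} dB")
```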

Cited by 4 publications (3 citation statements)
References 15 publications
“…Cartwright et al [11] repeated the original PEASS experiment [4], and found consistent positive correlations for all four BSS Eval statistics (PEASS was not assessed), with the highest being around 0.75 for SIR (interference) and 0.55 for SAR (artificial noise). Finally, Simpson et al [12] asked listeners to rate the overall similarity of 10 vocal segments, whilst ignoring the accompaniment, extracted by five algorithms against the original source. They carried out a second experiment in which listeners judged the amount of interference indirectly by rating the similarity of the vocal-to-accompaniment loudness ratio to that of the original mixture.…”
Section: Previous Work
confidence: 99%
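
The second experiment described above hinges on a vocal-to-accompaniment loudness ratio. As a rough sketch of that physical quantity, the snippet below computes an RMS-based level ratio for a separated vocal against the original pair; using RMS level as a stand-in for loudness is a simplification (the cited study would rely on a proper loudness model), and the signals and function names are hypothetical placeholders.

```python
# Rough sketch of a vocal-to-accompaniment level ratio, the quantity whose
# similarity to that of the original mixture listeners rated in Simpson et al. [12].
# RMS level is used as a crude stand-in for loudness (an assumption; a
# perceptual loudness model would be more faithful to the cited study).
import numpy as np

def rms_db(x):
    """Root-mean-square level in dB."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def vocal_to_accompaniment_ratio(vocal, accompaniment):
    """Level difference (dB) between a vocal track and its accompaniment."""
    return rms_db(vocal) - rms_db(accompaniment)

rng = np.random.default_rng(1)
n = 44100
original_vocal = rng.standard_normal(n)
accompaniment = 0.5 * rng.standard_normal(n)
separated_vocal = original_vocal + 0.2 * accompaniment  # imperfect separation

print(f"original ratio:  {vocal_to_accompaniment_ratio(original_vocal, accompaniment):.1f} dB")
print(f"separated ratio: {vocal_to_accompaniment_ratio(separated_vocal, accompaniment):.1f} dB")
```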
“…These effect sizes indicate that APS yields the strongest relationship with the subjective sound-quality ratings, with ISR performing the worst. Given that previous studies have found associations between SAR and sound-quality perception [11,12], it is interesting to compare this metric with APS. Fig.…”
Section: Objective Metrics
confidence: 99%
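
The effect-size comparison described above amounts to correlating each objective metric (here APS and SAR) with the same set of subjective ratings and seeing which relationship is stronger. The sketch below illustrates this with Spearman rank correlation; the arrays are made-up per-stimulus scores and mean listener ratings, not data from the cited study.

```python
# Illustrative sketch: comparing how strongly two objective metrics
# (e.g. APS and SAR) relate to subjective sound-quality ratings.
# All numbers are hypothetical, not data from the cited study.
import numpy as np
from scipy.stats import spearmanr

# One value per stimulus: hypothetical metric scores and mean listener ratings.
aps = np.array([0.31, 0.55, 0.62, 0.48, 0.71, 0.39, 0.80, 0.58])
sar = np.array([4.2, 7.9, 9.1, 6.5, 10.3, 5.0, 12.4, 8.8])    # dB
ratings = np.array([2.1, 3.4, 3.9, 3.0, 4.3, 2.5, 4.7, 3.6])  # mean opinion scores

rho_aps, p_aps = spearmanr(aps, ratings)
rho_sar, p_sar = spearmanr(sar, ratings)
print(f"APS vs ratings: rho={rho_aps:.2f} (p={p_aps:.3f})")
print(f"SAR vs ratings: rho={rho_sar:.2f} (p={p_sar:.3f})")
```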
“…In [20], subjective tests were conducted to investigate the extent to which users were satisfied by personalising objectbased content, with a source separation scenario considered. The MARuSS (Musical Audio Repurposing using Source Separation) project has worked on the problem of musical remix and upmix using deep learning-based BSS [16], including separation of vocals from the remainder of the mix [18], and perceptual evaluation of BSS in the context of remixing [17,22].…”
Section: Introduction
confidence: 99%