2015
DOI: 10.1016/j.jcomdis.2014.11.003

Online crowdsourcing for efficient rating of speech: A validation study

Abstract: Blinded listener ratings are essential for valid assessment of interventions for speech disorders, but collecting these ratings can be time-intensive and costly. This study evaluated the validity of speech ratings obtained through online crowdsourcing, a potentially more efficient approach. In total, 100 words from children with /r/ misarticulation were presented electronically for binary rating by 35 phonetically trained listeners and 205 naïve listeners recruited through the Amazon Mechanical Turk (AMT) crowdsourcing …
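The aggregation step implied by the abstract (pooling many naïve binary judgments per word) can be illustrated with a short sketch. This is not the authors' code; the file name and column names are hypothetical, assuming one row per (listener, word) judgment.

```python
import pandas as pd

# Hypothetical ratings table: one row per (listener, word) judgment,
# where correct = 1 means the listener judged the /r/ production accurate.
ratings = pd.read_csv("amt_binary_ratings.csv")  # columns: listener_id, word_id, correct

# Aggregate across listeners: proportion judged correct per word,
# plus the number of listeners contributing to each word.
by_word = ratings.groupby("word_id")["correct"].agg(
    p_correct="mean",   # fraction of listeners voting "correct"
    n_raters="count",   # listeners per word
)
print(by_word.head())
```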

Cited by 80 publications (65 citation statements)
References 35 publications

“…To quantify the impact of cooling on speech quality for all recorded sound files, we used an online crowdsourcing approach adapted from a recently validated method (McAllister Byun et al., 2015), in which each vocalization was rated on a visual analog scale (VAS) (Munson et al., 2012) from 0 (‘Extremely degraded’) to 1 (‘Typical/Normal’). Each subject’s sound files were evaluated by 20.4 ± 1.0 online participants.…”
Section: Results
confidence: 99%
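A minimal sketch of this kind of VAS aggregation (illustrative only, not the cited study's code; the file and column names are hypothetical): compute each sound file's mean VAS score and check rater coverage.

```python
import pandas as pd

# Hypothetical VAS ratings: one row per (rater, sound file) pair,
# scored on the 0 ('Extremely degraded') to 1 ('Typical/Normal') scale.
vas = pd.read_csv("vas_ratings.csv")  # columns: rater_id, file_id, score

# Per-file quality estimate: mean VAS across online raters,
# plus how many raters heard each file.
per_file = vas.groupby("file_id")["score"].agg(mean_vas="mean", n_raters="count")

# Rater coverage across files (cf. the reported 20.4 ± 1.0 participants per file).
print(per_file["n_raters"].mean(), per_file["n_raters"].std())
```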
“…Table 2 reports individual percentages of items rated correct on this word probe, subdivided into vocalic and consonantal variants. Elsewhere in this paper, participants’ rhotic production accuracy will be estimated using perceptual ratings obtained from naïve listeners, who tend to be more lenient in their ratings of children’s rhotic sounds than trained experts (McAllister Byun et al., 2015).…”
Section: Methods
confidence: 99%
“…Numerous studies have reported that results obtained through AMT are comparable to those obtained in a lab-based setting (e.g., Paolacci et al., 2010; Sprouse, 2011; Crump et al., 2013). McAllister Byun et al. (2015) investigated the validity of crowdsourced data collection in the specific context of ratings of children’s productions of rhotic sounds. They found that binary ratings aggregated over 250 naïve listeners on AMT were highly correlated with binary ratings aggregated over 25 expert listeners (r = 0.92) and with an acoustic measure of rhoticity, F3-F2 distance (r = −0.79).…”
Section: Methods
confidence: 99%
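The validation logic described here (aggregated naïve ratings tracking expert ratings and the F3-F2 acoustic measure) amounts to Pearson correlations over per-word aggregates. A sketch with synthetic stand-in data, since the study's data are not reproduced here:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic per-word aggregates for 100 stimulus words (stand-ins only).
naive = rng.uniform(0, 1, 100)                           # AMT proportion rated correct
expert = np.clip(naive + rng.normal(0, 0.1, 100), 0, 1)  # expert proportion rated correct
f3_f2 = 2000 - 1500 * naive + rng.normal(0, 150, 100)    # F3-F2 distance in Hz

# Expected pattern: strong positive r with experts, negative r with F3-F2
# (a lower F3-F2 distance signals a more rhotic, more accurate /r/).
print("naive vs expert:", pearsonr(naive, expert)[0])
print("naive vs F3-F2:", pearsonr(naive, f3_f2)[0])
```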
“…Rhotics are distinguished acoustically from other sonorants by the low height of the third formant (F3), which closely approximates the second formant (F2). Both case studies (Shuster, Ruscello, & Smith, 1992; Shuster, Ruscello, & Toth, 1995) and single-subject experimental studies (McAllister Byun, 2017; McAllister Byun & Campbell, 2016; McAllister Byun, Halpin, & Szeredi, 2015; McAllister Byun & Hitchcock, 2012) have reported that visual-acoustic biofeedback featuring a lowered F3 target can improve rhotic production in speakers who have not responded to other forms of intervention. One caution that has been raised in previous studies of various types of biofeedback (e.g., Gibbon & Paterson, 2006; McAllister Byun & Hitchcock, 2012; Preston et al., 2014) is that gains made in the treatment setting do not automatically generalize to contexts in which enhanced feedback is not available.…”
confidence: 99%
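The F3-F2 criterion mentioned in these citing passages can be measured directly from a recording. A sketch assuming the praat-parselmouth package, a hypothetical WAV file, and an illustrative midpoint measurement time:

```python
import parselmouth  # Praat wrapper: pip install praat-parselmouth

snd = parselmouth.Sound("rhotic_token.wav")  # hypothetical recording

# Burg-method formant tracking; 5 formants up to 5500 Hz is a common
# starting point for child speech, but settings should be tuned per speaker.
formants = snd.to_formant_burg(max_number_of_formants=5, maximum_formant=5500)

t = snd.duration / 2                   # illustrative choice: temporal midpoint
f2 = formants.get_value_at_time(2, t)  # second formant (Hz)
f3 = formants.get_value_at_time(3, t)  # third formant (Hz)

# A small F3-F2 distance means F3 approaches F2, i.e. a more rhotic sound.
print(f"F3 - F2 = {f3 - f2:.0f} Hz")
```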