1970
DOI: 10.1044/jshr.1303.548

Consistency of Judgments of Articulatory Productions

Abstract: Undergraduate majors in speech pathology were relatively consistent in their judgments of articulatory productions. Speech pathology majors were more consistent in making judgments of correct productions than in judgments of incorrect productions; they were more consistent in judging sounds in words than in phrases or trios; and they were more consistent in judgments between the first and second tests than between the first and third tests. Special training should be given to the identif…

Cited by 6 publications (5 citation statements); references 0 publications.
“…Indeed, /r/ is notoriously difficult to code reliably, even among trained phoneticians; Lawson et al. (2014) coded Glaswegian English /r/ into seven categories, and allowing for one category leeway (e.g., counting 'no /r/' and 'derhotic' as agreement), the three authors achieved 84-86% agreement. Comparable findings for other variables have been reported by Irwin (1970) (86-87% test-retest consistency for labeling consonant misarticulations); Pitt, Johnson, Hume, Kiesling, and Raymond (2005) (92.9% inter-coder agreement for stops, 86.5% for liquids); Fosler-Lussier, Dilley, Tyson, and Pitt (2007) (79.4-80.9% inter-coder agreement for stops, 74.7-79.0% for liquids/glides); and Hall-Lew and Fix (2012) (by-token SDs around 0.8-1.0 for coding /l/ vocalization on a 4-point scale). Hall-Lew and Fix (2012) also note that the 'intermediate' tokens (those with mean ratings in the middle of the scale) received the least reliable ratings.…”
Section: Discussion (supporting)
confidence: 82%
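The agreement figures quoted above rest on a simple metric: the share of tokens on which coders' ordinal codes differ by at most a fixed leeway (zero for exact agreement, one for the "one category leeway" criterion described for Lawson et al., 2014). A minimal sketch, with entirely hypothetical coder data and category labels:

```python
# Sketch of percent agreement between two coders on an ordinal coding scale,
# with an optional one-category leeway criterion. All data here are
# hypothetical; only the metric itself follows the description in the text.

def percent_agreement(codes_a, codes_b, leeway=0):
    """Fraction of tokens on which two coders' ordinal codes
    differ by at most `leeway` categories."""
    if len(codes_a) != len(codes_b):
        raise ValueError("both coders must rate the same tokens")
    hits = sum(abs(a - b) <= leeway for a, b in zip(codes_a, codes_b))
    return hits / len(codes_a)

# Hypothetical codes on a 7-category scale (e.g., 0 = 'no /r/' ... 6 = fully rhotic)
coder1 = [0, 1, 3, 4, 6, 2, 5, 1]
coder2 = [0, 2, 3, 5, 6, 2, 3, 1]

strict = percent_agreement(coder1, coder2)             # exact match: 0.625
lenient = percent_agreement(coder1, coder2, leeway=1)  # one-category leeway: 0.875
```

The leeway criterion inflates raw agreement, which is why studies report which criterion they used; chance-corrected statistics such as weighted kappa address the same problem more rigorously.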
“…See Kiesling et al. (2006) for more details. These labeling consistency data compare favorably with other studies (e.g., Irwin, 1970; Eisen, 1991). For example, Eisen (1991) found labeling accuracy of 88% for obstruent consonants.…”
Section: Discussion (mentioning)
confidence: 96%
“…A more recent unpublished test of intertranscriber reliability using eight labelers and 1 min of speech allowed us to specifically investigate agreement among canonical, deleted, and glottal variants. Agreement for these variants was 85.2%, indicating high reliability in line with previous findings of good interrater agreement (e.g., Irwin, 1970; Eisen, 1991). The speech of 19 talkers (9 male, 10 female; approximately 138 000 words) was used to identify lexical sequences constituting assimilable environments, i.e., two-word sequences in which the place of articulation of the word-final phoneme could assimilate to that of a following word-initial phoneme.…”
Section: A. Methods (mentioning)
confidence: 99%
“…Dating back to Henderson (1938), agreement on two-way decisions (correct versus incorrect) has been shown to be higher than on five-way scoring (correct, deletion, substitution, distortion, addition), presumably due to the increased complexity of the decision process and the lack of systematic response definitions for substitutions, additions, and distortions (Irwin, 1970; Irwin and Krafchick, 1965; Norris et al., 1980; Philips and Bzoch, 1969). Even when agreement in the present study is based on the lenient agreement criterion of any diacritic to mark a distortion, it averages only 48%.…”
Section: T J Y E Systems and Agreement Criteria (mentioning)
confidence: 96%
“…Irwin, 1970; Pye et al., 1988) when transcribed using broad phonetic transcription. Using the same criteria, only 3 of the 24 consonants (13%) and 9 of the 17 vowels (53%) had acceptable agreement when transcribed using narrow phonetic transcription.…”
Section: Consonant and Vowel Agreement (mentioning)
confidence: 99%