2005
DOI: 10.1177/00238309050480030301
Judgment of Disfluency in People who Stutter and People who do not Stutter: Results from Magnitude Estimation

Abstract: Two experiments used a magnitude estimation paradigm to test whether perception of disfluency is a function of whether the speaker and the listener stutter or do not stutter. Utterances produced by people who stutter were judged as "less fluent," and, critically, this held for apparently fluent utterances as well as for utterances identified as containing disfluency. Additionally, people who stutter tended to perceive utterances as less fluent, independent of who produced these utterances. We argue that these …

Cited by 22 publications (18 citation statements) | References 30 publications
“…Further, Postma and Kolk [43] found AWS were equally accurate when identifying their speech errors under auditory masking as AWNS, suggesting that any self-monitoring differences in AWS are not perception-based, but instead associated with a phonological processing deficit (see also [44]). In this same study, however, and in contrast to Lickley et al [41], AWS detected significantly fewer errors in third-party speech than AWNS, suggesting that AWS may have simply generated fewer phonological errors than AWNS. Thus, available data indicate that error monitoring in AWS may be atypical compared to AWNS (cf.…”
Section: Speech Monitoring in AWS (contrasting)
confidence: 86%
“…Nevertheless, all 3 theories cite evidence for overly critical evaluation of internal and external speech production by AWS. For example, Lickley et al [41] found AWS are more likely to judge external speech as less fluent regardless of whether the speaker is an AWS or a typically fluent adult. Although these participants provided subjective ratings of “fluency,” the authors interpreted AWS’s propensity to judge third-party speech that was ostensibly fluent more harshly than AWNS as evidence for the VCH [32].…”
Section: Speech Monitoring in AWS (mentioning)
confidence: 99%
“…Verbal self-monitoring is a crucial part of language production, especially when one considers that producing speech errors hampers the fluency of speech and can sometimes lead to embarrassment, for instance, when taboo words are uttered unintentionally (Motley, Camden, & Baars, 1982). Furthermore, malfunction of verbal monitoring is often implicated in disorders such as aphasia (for an overview, see Oomen, Postma, & Kolk, 2001), stuttering (Lickley, Hartsuiker, Corley, Russell, & Nelson, 2005), and schizophrenia (for an overview, see Seal, Aleman, & McGuire, 2004).…”
Section: Introduction (mentioning)
confidence: 99%
“…Furthermore, verbal monitoring is often implicated in disorders such as aphasia (for an overview, see Oomen, Postma, & Kolk, 2001), stuttering (Lickley et al., 2005), and schizophrenia (for an overview, see Seal et al., 2004). …”
Section: Introduction (mentioning)
confidence: 99%