2017
DOI: 10.1121/1.4986746
A relationship between processing speech in noise and dysarthric speech

Abstract: There is substantial individual variability in understanding speech in adverse listening conditions. This study examined whether a relationship exists between processing speech in noise (environmental degradation) and dysarthric speech (source degradation), with regard to intelligibility performance and the use of metrical stress to segment the degraded speech signals. Ninety native speakers of American English transcribed speech in noise and dysarthric speech. For each type of listening adversity, transcripti…

Cited by 36 publications (28 citation statements)
References 30 publications
“…In English, strong syllables (those receiving relative stress through longer duration, fundamental frequency change, increased loudness, and a relatively full vowel) can be used to identify the onset of a new word (Cutler & Norris, 1988). Exploiting this statistical structure has been shown to be particularly useful in adverse listening conditions such as speech in noise (M. R. Smith, Cutler, Butterfield, & Nimmo-Smith, 1989) and dysarthric speech (Borrie, Baese-Berk, et al., 2017), although large individual variation in the degree to which listeners exploit this strategy has been observed (Borrie, Baese-Berk, et al., 2017). In languages such as Spanish, however, speakers do not produce large differences in syllable stress; rather, syllables are relatively isochronous (White & Mattys, 2007).…”
Section: Discussion
confidence: 99%
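As a hypothetical illustration only (not code from the article or any cited study), the metrical segmentation strategy summarized in the statement above can be sketched as a simple heuristic over a pre-labeled stream of (syllable, stress) pairs, hypothesizing a word onset at every strong syllable:

# Toy sketch of the metrical segmentation strategy (Cutler & Norris, 1988):
# assume each syllable is already labeled "S" (strong) or "W" (weak), and
# hypothesize a word boundary immediately before every strong syllable.
def segment_at_strong_syllables(syllables):
    words, current = [], []
    for syllable, stress in syllables:
        if stress == "S" and current:
            words.append(current)  # new candidate word begins at a strong syllable
            current = []
        current.append(syllable)
    if current:
        words.append(current)
    return words

# Example: "speech in noise" -> strong, weak, strong
print(segment_at_strong_syllables([("speech", "S"), ("in", "W"), ("noise", "S")]))
# [['speech', 'in'], ['noise']]

In this toy example the heuristic correctly posits an onset before "noise" but groups the weak function word "in" with the preceding strong syllable, the kind of mis-segmentation that an overreliance on metrical cues is expected to produce for weak-initial words.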
“…Research examining listener performance in multiple types of adverse listening conditions has in some cases shown relationships between conditions (e.g., Borrie, Baese-Berk, Van Engen, & Bent, 2017), but in other cases has shown dissociations (e.g., Bent et al., 2016). The aim of the present study was to examine multiple types of adverse listening conditions and the potential cognitive, linguistic, and perceptual skills that support success in each condition.…”
Section: Differences Between Types of Adverse Listening Conditions
confidence: 94%
“…Although it cannot be known for certain whether the homogeneity of listener participants in Experiment 1 was the primary factor in the observed differences between the two experiments, it remains an important point that the particular demographic makeup of the subject population may influence the results of speech perception studies, and findings of studies that use a homogeneous group of subjects should be interpreted with caution. Crowdsourcing of speech perception studies is becoming increasingly common (e.g., Borrie, Baese-Berk, Van Engen, & Bent, 2017; Yoho & Borrie, 2018), and has been validated as an effective and reliable means of collecting human listener data (Lansford et al., 2016; McAllister Byun, Halpin, & Szeredi, 2015; Slote & Strand, 2016). An analysis of the laboratory-based and crowdsourced data collection in the current study confirmed this, showing that even though the specific pattern of results differed between the laboratory-based and crowdsourced data, the overall rating scores for the different talker and listener groups were not significantly different.…”
Section: Discussion
confidence: 99%
“…However, comparable results have been found with data collected via MTurk and data collected in the laboratory, including studies involving speech perception in adverse conditions, such as perception of disordered speech and speech in background noise (Cooke et al., 2011; Lansford et al., 2016; McAllister Byun et al., 2015; Slote & Strand, 2016). As such, a number of studies of speech perception in adverse conditions have gone on to make use of data collection via MTurk (e.g., Borrie et al., 2017a, 2017b). While we did not compare data collection environments, the data collected in the current study displayed a surprisingly small degree of variability across the 20 listener participants in each testing condition (see error bars on Fig.…”
Section: Discussion
confidence: 56%
“…Phrases were all six syllables in length and ranged from three to five words. These phrases, which reduce the influence of lexical cues on perceptual processing, were created specifically for examining speech perception in adverse conditions (Liss et al., 1998) and have been used extensively in the study of perception of dysarthric speech (e.g., Borrie et al., 2012; Borrie et al., 2017a). Two 72-yr-old male native talkers of American English, one with dysarthria and one an age-matched, neurologically healthy control, produced the stimuli for the study.…”
Section: B. Speech Stimuli
confidence: 99%