2020
DOI: 10.1044/2020_jslhr-19-00324
Suprathreshold Differences in Competing Speech Perception in Older Listeners With Normal and Impaired Hearing

Abstract: Purpose: Age-related declines in auditory temporal processing and cognition make older listeners vulnerable to interference from competing speech. This vulnerability may be increased in older listeners with sensorineural hearing loss due to additional effects of spectral distortion and accelerated cognitive decline. The goal of this study was to uncover differences between older hearing-impaired (OHI) listeners and older normal-hearing (ONH) listeners in the perceptual encoding of competing speech s…

Cited by 9 publications (12 citation statements)
References 96 publications
“…After scanning, the filters themselves, each summarizing the spectrotemporal patterns retained in each sentence, are weighted by the response elicited by the corresponding sentence and summed to produce a STRF. Essentially, the weighted sum is a linear model (cf. Venezia, Hickok, & Richards, 2016; Venezia, Leek, & Lindeman, 2020; Venezia, Martin, Hickok, & Richards, 2019a) whose predictors are the binary filters and whose criterion is the sentence-by-sentence fMRI response magnitude (i.e., event-related beta time series). A schematic of this process is shown in Figure 1C.…”
Section: Overview (mentioning)
Confidence: 99%
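
The weighted-sum estimator described above reduces to a linear model whose predictors are the binary filters and whose criterion is the per-sentence response. A minimal Python sketch of that relationship; all array names, shapes, and the simulated responses are illustrative assumptions, not the authors' code:

    import numpy as np

    rng = np.random.default_rng(0)

    n_sentences = 200        # sentences presented during scanning
    n_spec, n_temp = 8, 10   # spectrotemporal modulation bins per filter

    # Binary filters: 1 where a spectrotemporal region was retained in a sentence
    filters = rng.integers(0, 2, size=(n_sentences, n_spec, n_temp)).astype(float)

    # Simulated per-sentence fMRI response magnitudes (event-related betas)
    betas = rng.normal(size=n_sentences)

    # Weighted sum: each filter weighted by the response it elicited, then summed
    strf = np.tensordot(betas, filters, axes=1) / n_sentences

    # The regression view of the same idea: betas regressed on the binary filters
    X = filters.reshape(n_sentences, -1)             # predictors: binary filters
    w, *_ = np.linalg.lstsq(X, betas, rcond=None)    # criterion: response betas
    strf_regression = w.reshape(n_spec, n_temp)

The two estimates coincide only when the filters are uncorrelated across sentences; the explicit regression form accounts for overlap between filters.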
“…These paradigms have been shown to be powerful tests of listening in complex environments because of their sensitivity to small intelligibility changes in highly noisy backgrounds, their applicability to testing with different maskers, and their relative independence from semantic/syntactic cues (Brungart, 2001; De Sousa et al., 2020; Eddins & Liu, 2012; Humes et al., 2017). Accumulating work demonstrates that speech reception thresholds (SRTs) estimated with an adaptive CRM task correlate with audiometric thresholds and with age (de Kerangal et al., 2020; Schoof & Rosen, 2014; Venezia et al., 2020), rendering it a potentially efficient proxy of hearing ability (Semeraro et al., 2017). An additional advantage is that the task relies on manipulating the relative intensity of the target and the masker, and performance is largely independent of overall level over a reasonable range.…”
(mentioning)
Confidence: 99%
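
The adaptive procedure mentioned here can be illustrated with a generic 1-up/1-down staircase on the target-to-masker ratio (TMR), which converges near 50% correct. A minimal sketch with a simulated listener; the step size, reversal rule, and response model are hypothetical stand-ins for the actual CRM protocol:

    import random

    def run_trial(tmr_db):
        # Placeholder listener whose true SRT is -6 dB TMR (logistic psychometric function)
        p_correct = 1 / (1 + 10 ** (-(tmr_db + 6) / 4))
        return random.random() < p_correct

    tmr_db, step_db = 10.0, 2.0
    reversals, last_correct = [], None
    while len(reversals) < 8:
        correct = run_trial(tmr_db)
        if last_correct is not None and correct != last_correct:
            reversals.append(tmr_db)                 # track direction changes
        tmr_db += -step_db if correct else step_db   # down when correct, up when wrong
        last_correct = correct

    srt_estimate = sum(reversals[-6:]) / 6           # mean TMR at the last reversals
    print(f"Estimated SRT: {srt_estimate:.1f} dB TMR")

Because only the relative level of target and masker is adapted, the estimate is largely insensitive to overall presentation level, consistent with the advantage noted in the statement above.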
“…There is now converging evidence that the human auditory system relies on MPS representations to analyze complex suprathreshold signals such as speech. Physiological studies have shown that the central auditory system exhibits specialized tuning to STMs (Hullett et al., 2016; Santoro et al., 2017), and behavioral studies have demonstrated that speech intelligibility is conveyed by STMs within specific ranges of temporal (1–10 Hz) and spectral (1–2 cycl/oct) modulations (Elliott & Theunissen, 2009; Venezia et al., 2016, 2020). Furthermore, results from modeling studies have shown that cortical auditory models or metrics such as the STM index, all based on decomposition of auditory signals through an STM filter bank, provide accurate accounts of SIN intelligibility scores (Bernstein et al., 2013b; Chi et al., 1999; Elhilali et al., 2003).…”
(mentioning)
Confidence: 99%
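
The STM decomposition referenced here is commonly computed as the two-dimensional Fourier transform of a (log-)spectrogram, giving power as a joint function of temporal modulation (Hz) and spectral modulation (cycles/octave). A minimal sketch under that assumption; a plain STFT stands in for the log-spaced auditory filter bank that cortical models actually use, so the spectral-modulation axis below is in cycles per bin rather than cycles per octave:

    import numpy as np

    fs = 16000
    t = np.arange(fs) / fs
    # A 440 Hz tone with a 4 Hz amplitude modulation, i.e., energy at a
    # temporal modulation rate inside the speech-relevant 1-10 Hz range
    x = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

    # Short-time spectrogram
    win, hop = 512, 128
    frames = np.lib.stride_tricks.sliding_window_view(x, win)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)).T
    log_spec = np.log(spec + 1e-8)

    # Modulation power spectrum: 2D FFT over the (frequency, time) plane
    mps = np.abs(np.fft.fftshift(np.fft.fft2(log_spec))) ** 2

    # Temporal-modulation axis in Hz (frame rate = fs / hop)
    temp_mod_hz = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[1], d=hop / fs))
    peak = temp_mod_hz[np.argmax(mps.sum(axis=0) * (temp_mod_hz > 0.5))]
    print(f"Dominant temporal modulation: {peak:.1f} Hz")  # ~4 Hz for this signal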