2019
DOI: 10.1101/744813
Preprint

Neural signatures of disordered multi-talker speech perception in adults with normal hearing

Abstract: In social settings, speech waveforms from nearby speakers mix together in our ear canals. The brain unmixes the attended speech stream from the chorus of background speakers using a combination of fast temporal processing and cognitive active listening mechanisms. Multi-talker speech perception is vulnerable to aging or auditory abuse. We found that ~10% of adult visitors to our clinic have no measurable hearing loss, yet offer a primary complaint of poor hearing. Multi-talker speech intelligibility…


Cited by 5 publications (6 citation statements)
References: 97 publications
“…Modulations in frequency (FM) and amplitude (AM) carry critical information in biologically relevant sounds, such as speech, music, and animal vocalizations (Attias and Schreiner, 1997; Nelken et al., 1999). In humans, AM is crucial for understanding speech in quiet (Shannon et al., 1995; Smith et al., 2002), while FM is particularly important for perceiving melodies, recognizing talkers, determining speech prosody and emotion, and may aid in the perception of speech presented in competing background sounds (Zeng et al., 2005; Strelcyk and Dau, 2009; Sheft et al., 2012; Johannesen et al., 2016; Lopez-Poveda et al., 2017; Parthasarathy et al., 2019). The perception of FM at both slow and fast modulation rates is often degraded in older people and those with hearing loss (Lacher-Fougère and Demany, 1998; Moore and Skrodzka, 2002; He et al., 2007; Strelcyk and Dau, 2009; Grose and Mamo, 2012; Paraouty et al., 2016; Wallaert et al., 2016; Paraouty and Lorenzi, 2017; Whiteford et al., 2017).…”
Section: Introduction (mentioning)
Confidence: 99%
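To make the two cue types in the quoted passage concrete, here is a minimal Python sketch that synthesizes a sinusoidally amplitude-modulated tone and a frequency-modulated tone around the same carrier. The carrier frequency, modulation rate, AM depth, and FM excursion are illustrative assumptions, not parameters taken from the cited studies.

```python
# Minimal sketch (illustrative parameters, not from the cited papers):
# an AM tone (envelope varies, frequency fixed) and an FM tone
# (envelope flat, instantaneous frequency varies).
import numpy as np

fs = 44100                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # 1 s of samples

fc = 500.0   # carrier frequency (Hz), assumed
fm = 2.0     # modulation rate (Hz), a "slow" rate in this literature
m = 0.5      # AM depth
df = 25.0    # FM frequency excursion (Hz)

# AM: the envelope fluctuates at fm while the frequency stays at fc.
am_tone = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# FM: the envelope is flat; the instantaneous frequency is fc + df*sin(2*pi*fm*t).
fm_tone = np.sin(2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t))
```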
“…Either the instantaneous amplitude of the signal is encoded by phase-locked firing (i.e., the temporal fine structure, TFS, of the stimulus), or FM is converted to amplitude modulation (AM) by changes in the output of auditory filters fixed in place along the cochlear partition (Zwicker, 1956; Khanna and Teich, 1989; Sęk and Moore, 1995; Moore and Sęk, 1996). Differences in the relationship between modulation frequency and detection for FM and AM signals have previously been interpreted as evidence for the importance of TFS cues for FM detection at low modulation frequencies (Rose et al., 1967; Moore and Sęk, 1995; Parthasarathy et al., 2019). However, there is converging evidence that place cues alone may be able to explain FM detection even at low modulation rates.…”
Section: Introduction (mentioning)
Confidence: 99%
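The FM-to-AM conversion described in that excerpt can be illustrated with a small sketch, under the simplifying assumption that a single fixed band-pass filter stands in for one auditory filter; the filter type and band edges below are assumptions, not the cited papers' models. An FM tone with a flat envelope comes out of an off-frequency filter with an envelope that fluctuates at the modulation rate.

```python
# Minimal sketch (assumed filter, not the cited papers' models): an FM tone
# passed through a band-pass filter fixed just below the carrier, standing in
# for one auditory filter. The sloping filter edge maps instantaneous-frequency
# excursions onto output-level changes, i.e. FM is converted to AM.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
fc, fm, df = 500.0, 2.0, 25.0   # carrier, modulation rate, FM excursion (Hz)

# FM tone with a flat envelope (same construction as the sketch above).
fm_tone = np.sin(2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t))

# Fixed band-pass filter centred below the carrier (assumed 420-480 Hz passband).
sos = butter(4, [420, 480], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, fm_tone)

# The envelope of the filter output now fluctuates at the modulation rate,
# even though the input envelope was flat.
envelope = np.abs(hilbert(filtered))
print(f"output envelope fluctuation depth: {envelope.max() - envelope.min():.3f}")
```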
“…Attending to a single conversation partner in the presence of multiple distracting talkers (i.e., the Cocktail Party Problem, CPP) is a complicated and difficult task for machines and humans [1][2][3]. While some normal-hearing listeners can accomplish this task with relative ease, other groups of listeners report great difficulty, such as those with sensorineural hearing loss [4][5][6], cochlear implant users [7][8][9][10], subgroups of children [11] and adults with "hidden hearing loss" [12][13][14]. At a cocktail party, talkers are distributed in space, and normal-hearing listeners appear to make use of spatial cues (i.e., interaural timing and level differences, or ITDs and ILDs, respectively) to perceptually localize and segregate sound mixtures into individual spatial components.…”
Section: Introduction (mentioning)
Confidence: 99%
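As a rough illustration of the spatial cues mentioned in that excerpt, the sketch below computes the ITD produced by a distant source at a given azimuth using the textbook Woodworth spherical-head approximation; the head radius and speed of sound are assumed constants, and this is not the method of the cited studies. ILDs are only noted qualitatively, since they depend strongly on frequency and head shadowing.

```python
# Minimal sketch (textbook approximation, not from the cited work): interaural
# time difference (ITD) for a distant source at a given azimuth, via the
# Woodworth spherical-head formula ITD = (a / c) * (theta + sin(theta)).
import math

HEAD_RADIUS_M = 0.0875      # assumed average head radius (m)
SPEED_OF_SOUND = 343.0      # speed of sound in air (m/s)

def woodworth_itd(azimuth_deg: float) -> float:
    """ITD in seconds for a far-field source at azimuth_deg (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    print(f"azimuth {az:>2} deg: ITD ~ {woodworth_itd(az) * 1e6:.0f} microseconds")

# ILDs arise because the head shadows the ear farther from the source; the
# shadow is substantial mainly above roughly 1.5 kHz, where the wavelength
# becomes short relative to the head, so ILDs are strongly frequency dependent.
```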