2020
DOI: 10.1007/978-3-030-49367-7_6
The Aging Auditory System: Electrophysiology

Cited by 6 publications (7 citation statements)
References: 101 publications
“…By averaging the EEG signal in response to a large number of repetitive sound or speech stimuli, peaks in specific time ranges have been consistently identified in neurotypicals, i.e., event-related potentials (ERPs), offering a window into the spatio-temporal patterns of the neural response to speech. This way, ERPs related to acoustic (e.g., P1-N1-P2 complex (e.g., Martin et al., 2008; Harris, 2020)) and linguistic (e.g., N400 (e.g., Hillyard and Kutas, 1984; Kutas and Federmeier, 2011; Nieuwland et al., 2020)) aspects of speech have been identified. In IWA, altered ERPs have been found across language processing levels and across a variety of experimental stimuli and tasks (Ofek et al., 2013; Becker and Reinvang, 2007; Ilvonen et al., 2001; Pulvermüller et al., 2004; Aerts et al., 2015; Ilvonen et al., 2004; Pettigrew et al., 2005; Robson et al., 2017; Chang et al., 2016; Kawohl et al., 2010; Khachatryan et al., 2017; Sheppard et al., 2017; Lice and Palmović, 2017; Kielar et al., 2012; Räling et al., 2016).…”
Section: Introduction (mentioning)
confidence: 99%
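The averaging logic the excerpt describes (time-locked ERP components surviving the average while background EEG cancels) can be sketched in plain NumPy. This is an illustrative simulation, not code from any of the cited studies; the sampling rate, component latencies, amplitudes, and noise level below are all assumed values chosen to mimic a textbook P1-N1-P2 complex.

```python
import numpy as np

# Hedged sketch: ERP estimation by averaging EEG epochs time-locked to
# repeated stimuli. All signal parameters (sampling rate, peak latencies,
# amplitudes, noise SD) are illustrative assumptions.

fs = 500                            # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)    # epoch window: -100 ms to +500 ms

def gauss_peak(t, latency, width, amp):
    """A Gaussian bump standing in for one ERP component."""
    return amp * np.exp(-0.5 * ((t - latency) / width) ** 2)

# Idealized P1-N1-P2 complex (latencies/amplitudes are rough, textbook-like).
erp_true = (gauss_peak(t, 0.05, 0.010, 1.0)     # P1 near 50 ms
            - gauss_peak(t, 0.10, 0.015, 2.0)   # N1 near 100 ms
            + gauss_peak(t, 0.18, 0.025, 1.5))  # P2 near 180 ms

rng = np.random.default_rng(0)
n_trials = 200
# Each single trial = the same time-locked ERP + large background "EEG" noise.
epochs = erp_true + rng.normal(0.0, 5.0, size=(n_trials, t.size))

# Averaging across trials shrinks the non-time-locked noise by ~1/sqrt(n),
# while the stimulus-locked ERP is preserved.
erp_est = epochs.mean(axis=0)
residual = erp_est - erp_true
print(f"single-trial noise SD: 5.0; residual SD after averaging: {residual.std():.2f}")
```

With 200 trials the residual noise drops by roughly a factor of sqrt(200) ≈ 14, which is why the averaged waveform reveals peaks that are invisible in any single trial.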
“…A comprehensive review of age-related electrophysiological changes in the central auditory pathway can be found in [91]. Early auditory evoked potentials, especially the so-called frequency following response (FFR) after stimulation with both tone and speech signals, objectify impaired temporal processing at the brainstem level.…”
Section: Age-related Hearing Loss (mentioning)
confidence: 99%
“…A comprehensive overview of age-related electrophysiological changes in the central auditory pathway can be found in [91]: early auditory evoked potentials, in particular the so-called frequency following response (FFR) after stimulation with both tones and speech signals, objectify impaired temporal processing at the brainstem level.…”
Section: 3.2 Changes in Central Auditory Processing and Per... (unclassified)
“…By using EEG or magnetoencephalography (MEG), the neural response to auditory stimuli can be examined by averaging the signal over a large number of repetitive stimuli (i.e., event-related potentials (ERP)). Cortical auditory evoked potentials (CAEP), such as the P1-N1-P2 complex, reflect sound detection and encoding in the auditory cortex (Martin et al., 2008; Harris, 2020). In clinically normal-hearing older adults, CAEP peak amplitudes and latencies (N1 and P2) are increased compared to younger adults (McCullagh and Shinn, 2013; Tremblay et al., 2002, 2003).…”
Section: Age-related Changes in Acoustic Speech Processing (mentioning)
confidence: 99%
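Group comparisons like the N1/P2 amplitude and latency differences mentioned above rest on a simple measurement step: picking the most extreme point of a given polarity inside a conventional search window of the averaged waveform. A minimal NumPy sketch of that step follows; the 70-150 ms N1 window and the toy waveform are conventional illustrative choices, not values taken from the cited chapter or studies.

```python
import numpy as np

# Hedged sketch: measuring peak latency and amplitude of one CAEP component
# (here an N1-like negativity) from an averaged waveform. The search window
# and waveform parameters are illustrative assumptions.

fs = 500
t = np.arange(-0.1, 0.5, 1 / fs)
# Toy averaged ERP with a single N1-like trough near 100 ms, about -2 µV.
erp = -2.0 * np.exp(-0.5 * ((t - 0.10) / 0.015) ** 2)

def peak_in_window(t, erp, t_min, t_max, polarity=-1):
    """Return (latency_s, amplitude) of the most extreme point of the given
    polarity (-1 = negative peak such as N1, +1 = positive such as P2)
    within [t_min, t_max]."""
    mask = (t >= t_min) & (t <= t_max)
    seg = polarity * erp[mask]          # flip sign so argmax finds the peak
    i = np.argmax(seg)
    return t[mask][i], erp[mask][i]     # report the original signed amplitude

lat, amp = peak_in_window(t, erp, 0.07, 0.15, polarity=-1)
print(f"N1 latency: {lat * 1000:.0f} ms, amplitude: {amp:.2f} µV")
```

In practice such per-subject latency and amplitude values are what enter the young-versus-older statistical comparisons the excerpt refers to.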