2023
DOI: 10.1162/nol_a_00089
Dynamics of Functional Networks for Syllable and Word-Level Processing

Abstract: Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing, beyond segmentation, including the anatomical and neurophysiological characteristics of the…

Cited by 3 publications (3 citation statements)
References 108 publications
“…Interestingly, we found a similar phase-locking enhancement at 4 Hz, the dominant syllable rate typical for many languages (Ding et al., 2017; Greenberg et al., 2003; Greenberg et al., 1996; Tilsen & Johnson, 2008), that was evident in English speakers but absent in Chinese speakers. This contradicts previous assertions that cross-linguistic differences in neural-acoustic synchronization only appear at the supra-syllabic (but not syllabic) level (Blanco-Elorrieta et al., 2020; Ding et al., 2016; Rimmele et al., 2023), simply because the latter is similar across languages (Ding et al., 2017).…”
Section: Multilevel Brain-to-speech Synchronization Is Optimized For ...
Citation type: contrasting
confidence: 99%
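
The 4 Hz phase locking quoted above is typically quantified by band-limiting both the neural signal and the speech amplitude envelope around the syllable rate, extracting instantaneous phase with the Hilbert transform, and computing a phase-locking value (PLV). The sketch below is not code from the cited studies; it is a minimal, hypothetical Python example (the function name `phase_locking_value`, the 3-5 Hz band edges, and the synthetic data are all assumptions) showing one common way such a measure is computed with NumPy and SciPy.

```python
# Minimal PLV sketch (illustrative only, not the cited authors' pipeline).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def phase_locking_value(eeg, envelope, fs, band=(3.0, 5.0), order=4):
    """PLV between one EEG channel and the speech amplitude envelope.

    eeg, envelope : 1-D arrays of equal length, sampled at fs (Hz)
    band          : pass band centred on the ~4 Hz syllable rate (assumed)
    """
    sos = butter(order, band, btype="band", fs=fs, output="sos")
    phase_eeg = np.angle(hilbert(sosfiltfilt(sos, eeg)))
    phase_env = np.angle(hilbert(sosfiltfilt(sos, envelope)))
    # PLV = length of the mean unit phasor of the phase difference
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env))))

# Synthetic check: a 4 Hz envelope and an EEG trace that partly follows it.
fs = 250
t = np.arange(0, 60, 1 / fs)
envelope = 1 + np.cos(2 * np.pi * 4 * t)
eeg = 0.5 * np.cos(2 * np.pi * 4 * t - 0.7) + np.random.randn(t.size)
print(f"PLV at ~4 Hz: {phase_locking_value(eeg, envelope, fs):.2f}")
```

A PLV near 1 indicates consistent phase alignment between the EEG and the envelope at the syllable rate; a value near 0 indicates no systematic locking.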
“…A growing number of brain imaging studies suggest that speech is processed at multiple temporal windows operated by a set of neuronal oscillators whose frequencies are tuned to relevant features of the acoustic-linguistic signal (Ding et al., 2016; Ghitza, 2011; Gross et al., 2013; Hyafil et al., 2015; Kösem & Van Wassenhove, 2017; Poeppel, 2003; Rimmele et al., 2023; Teng et al., 2017). The oscillations associated with speech are spectrally distributed in the gamma (>30 Hz), theta (4-8 Hz), and delta (1-3 Hz) frequency bands of the EEG, roughly corresponding with the time spans of phonemic, syllabic, and supra-syllabic units.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
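
The band-to-timescale mapping described in the statement above (delta for supra-syllabic, theta for syllabic, gamma for phonemic units) is usually operationalized by band-pass filtering the EEG before any synchronization analysis. The following sketch is an illustration under stated assumptions rather than the cited authors' method: the delta and theta edges follow the quote, while the 80 Hz upper gamma edge, the sampling rate, and the zero-phase Butterworth settings are assumed for the example.

```python
# Illustrative band decomposition of one EEG channel (assumed parameters).
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Delta and theta edges follow the quote; the gamma upper edge is assumed.
BANDS = {"delta": (1.0, 3.0), "theta": (4.0, 8.0), "gamma": (30.0, 80.0)}

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass of a 1-D signal x sampled at fs (Hz)."""
    sos = butter(order, (lo, hi), btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 250                          # assumed EEG sampling rate (Hz)
eeg = np.random.randn(60 * fs)    # placeholder for one recorded EEG channel
band_signals = {name: bandpass(eeg, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
for name, sig in band_signals.items():
    print(f"{name}: RMS = {np.sqrt(np.mean(sig ** 2)):.3f}")
```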
“…One plausible hypothesis maps the hierarchies of "what" and "when" predictions onto neural hierarchies, such that the interactive effects of "what" predictions for single chunks (e.g., syllable) and "when" predictions for faster time scales (e.g., syllable onsets) are subserved by hierarchically lower cortical regions involved in syllable processing, such as the STG (Oganian and Chang, 2019). Conversely, interactions between "what" predictions for longer segments (e.g., words) and slower "when" predictions (e.g., word onsets) may instead be subserved by hierarchically higher cortical regions involved in supra-syllabic word processing, such as frontal regions (Rimmele et al., 2023). Yet, interactions between "what" and "when" predictions may not need to occur within the sensory processing hierarchy, and instead might rest on sensory-independent, generic mechanisms regardless of their hierarchical level in terms of speech contents and timing.…”
Section: Introduction
Citation type: mentioning
confidence: 99%