2020
DOI: 10.31234/osf.io/zncxs
Preprint

Dynamic EEG analysis during language comprehension reveals interactive cascades between perceptual processing and sentential expectations

Abstract: Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N=31) heard sentences in which we manipulated acoustic ambiguity (e.g., a bees/peas continuum) and sentential expectations (e.g., Honey is made by bees). EEG was analyzed with a mixed effects model over time to quantify how language p…

Cited by 7 publications (19 citation statements)
References 10 publications
“…Moreover, this response did not depend on the listeners' ultimate behavioral categorization of the sound, suggesting that the N1 response tracks the VOT of the stimulus independently of phonological representations. Effects of N1 amplitude reflecting gradient encoding of VOT have also been observed in other studies (Noe & Fischer‐Baum, 2020; Sarrett, McMurray, & Kapnoula, submitted). These results suggest that the auditory N1 serves as an index of early cue encoding for speech sounds.…”
Section: Gradient Representations (supporting)
confidence: 79%
“…ECoG data from experiments using nonspeech materials also reveal considerable sensitivity to graded acoustic differences (Nourski et al, 2014, 2015). Thus, although some ECoG studies suggest evidence for responses consistent with categorical perception in pSTG, others suggest that this area maintains sensitivity to subphonemic information, consistent with work using scalp‐recorded EEG (Getz & Toscano, 2019; Noe & Fischer‐Baum, in press; Sarrett et al, submitted; Toscano et al, 2010). It may also be the case that both types of representations are maintained in STG (Yi, Leonard, & Chang, 2019).…”
Section: Gradient Representations (supporting)
confidence: 63%