2020
DOI: 10.1126/sciadv.aba7830

Speech perception at birth: The brain encodes fast and slow temporal information

Abstract: Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of sp…

Cited by 46 publications (38 citation statements)
References 46 publications
“…Similarly, the discrimination response was present in both temporal and prefrontal regions, but with no statistically significant difference in response size across regions or hemispheres. This finding is consistent with the results of other studies measuring speech discrimination using a combination of channels from temporal and prefrontal areas 10, 39, 40, although others have suggested that prefrontal areas best reflect speech sound discrimination 9, 41.…”
Section: Discussion (supporting)
confidence: 92%
“…Finally, multi-modality recordings (e.g., EEG with NIRS) are on the rise 359 361 and in the next five years will provide a critical cross-modal comparison of the physiological processes underlying infants’ perception and cognition. Indeed, NIRS is uniquely poised to allow for these kinds of multimodal recordings as it does not have the same constraints as the MR environment.…”
Section: Functional Applications In Neurodevelopment and Cognition (mentioning)
confidence: 99%
“…Results from newborns and young infants have also brought new insights in this field. For example, combining hemodynamic (near-infrared spectroscopy) and EEG recordings to measure brain responses to syllables differing in consonants, Cabrera and Gervain (2020) showed that infants (9-10 months old) detect consonant changes on the basis of envelope cues (without the temporal fine structure) and they can even do so on the basis of the slow temporal variation alone (AM <8 Hz). These results are consistent with behavioral data obtained with older infants and adults for whom the slowest envelope cues are also sufficient to detect consonant changes in silence (Drullman, 1995; Shannon et al., 1995; Cabrera et al., 2015; Cabrera & Werner, 2017).…”
Section: Discussion (mentioning)
confidence: 99%
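
The quoted passage rests on a standard signal-processing manipulation: keeping only the slow amplitude-modulation (AM) envelope of a syllable, below roughly 8 Hz, while discarding its temporal fine structure, in the spirit of the vocoding manipulations cited above (Shannon et al., 1995). The sketch below is illustrative only, not the stimulus-generation code used in the paper; the function names, the Butterworth filter order, and the use of a broadband noise carrier are our assumptions.

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def slow_am_envelope(signal, fs, cutoff_hz=8.0):
    # Broadband Hilbert envelope: magnitude of the analytic signal.
    envelope = np.abs(hilbert(signal))
    # 4th-order Butterworth low-pass keeps only modulations slower than cutoff_hz.
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, envelope)

def envelope_only_version(signal, fs, cutoff_hz=8.0):
    # Replace the temporal fine structure with noise, keeping only the slow envelope.
    env = slow_am_envelope(signal, fs, cutoff_hz)
    carrier = np.random.default_rng(0).standard_normal(len(signal))
    return env * carrier

if __name__ == "__main__":
    fs = 16000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    # Toy "syllable": a 200 Hz carrier with a 4 Hz amplitude modulation.
    toy = (1.0 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
    degraded = envelope_only_version(toy, fs)
    print(degraded.shape)

A real noise vocoder would apply this per frequency band before summing the bands back together; the single-band version above is only meant to show, operationally, what "slow temporal variation alone (AM <8 Hz)" refers to.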