2021
DOI: 10.3390/brainsci11010049

Development of the Mechanisms Underlying Audiovisual Speech Perception Benefit

Abstract: The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical…

Cited by 25 publications (31 citation statements); references 112 publications.
“…Since we have not tested audio-visual speech understanding in the current experiment we cannot, however, compare the outcomes of the two multisensory speech contexts directly. This idea is nonetheless worth further investigation, as synchronous audio-visual speech information is what we as humans are exposed to from the very early years of development and throughout our lifetimes 9 , as opposed to the audio-tactile speech input that is an utterly novel experience, at least for the tested healthy individuals (we talk about the specific case of the visually impaired further in text). We therefore argue that through a well-designed intervention one can establish in adulthood a new coupling between a given computation and an atypical sensory modality that has never been used for encoding that type of information before.…”
Section: Discussion
confidence: 99%
“…This contrasts with audio-visual speech, i.e. listening to speech and lip reading/observing gestures at the same time, which is the natural multisensory input to which we are all exposed from early development and throughout the lifetime (see review in 9 ). If our SSD solution and training proves effective in the multisensory context, this will indicate that even in adulthood a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be built.…”
Section: Introduction
confidence: 99%
“…However, the acoustic attenuation of face masks may have a smaller effect on CHL who have poor high-frequency aided audibility, because the hearing loss restricts the speech bandwidth both with and without a face mask. The loss of visual cues resulting from the use of opaque face masks is likely to affect both CNH and CHL, as both groups benefit significantly from visual speech cues (see reviews by Lalonde and McCreery, 2020;Lalonde and Werner, 2021). However, CNH are likely to be less affected than adults by the loss of these visual cues, because they benefit less from visual speech (Wightman et al, 2006;Ross et al, 2011;Lalonde and Holt, 2016).…”
Section: Introduction
confidence: 99%
“…More specifically, 5-to 8-year-olds show highly variable results when completing audiovisual speech perception tasks. As suggested by Lalonde and Werner (2021), these results might be explained by extrinsic factors as task complexity, intrinsic factors (i.e., individual developmental skills) or the combination of both (i.e., general psychophysical testing performance).…”
Section: Procedural Modifications For Online Studies
confidence: 76%
“…The benefits of audiovisual (AV) speech perception, more specifically, having access to the articulation movements when the auditory speech signal is degraded by noise (Gijbels et al., in press), have been well studied in adults (see Grant and Bernstein, 2019 for a review). And although we know that infants (Kuhl and Meltzoff, 1984) and children (Lalonde and Werner, 2021 for a review) are sensitive to AV speech information, the size and the presence of an actual AV speech benefit have been debated (Jerger et al., 2009, 2014; Fort et al., 2012; Ross et al., 2011; Lalonde and McCreery, 2020). More specifically, 5- to 8-year-olds show highly variable results when completing audiovisual speech perception tasks.…”
Section: Procedural Modifications For Online Studies
confidence: 99%