2017
DOI: 10.1111/cdev.12715

The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood

Abstract: Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in silence. They were then tested on recognition in the same or the other modality. Both 18-month-old infants and adults learned the lexical mappings when the words were pr…

Cited by 16 publications (44 citation statements)
References 68 publications
“…At 12–13 months of age, infants detect auditory mispronunciations of familiar words in both auditory-only and AV matched conditions, but fail to detect mispronunciations when the AV display is mismatched (Weatherhead & White, 2017). By 18 months of age, if first taught a word–object match with only auditory information, infants will look correctly to the match in both an auditory-only (A-only) and a visual-only (V-only) test condition (Havy, Foroud, Fais, & Werker, 2017). Infants of this age cannot, however, learn the pairing when given only visual facial information in the training phase, whereas adults can.…”
Section: A Final Consideration: Speech Perception Is Multisensory
Mentioning confidence: 99%
“…Even if this ability remains rudimentary in infancy (e.g., see Lewkowicz, 2014, for a review), infants are already able to rely on visual speech information to facilitate phonetic acquisition (Teinonen, Aslin, Alku, & Csibra, 2008; Ter Schure, Junge, & Boersma, 2016) and word learning processes (Havy, Foroud, Fais, & Werker, 2017; Weatherhead & White, 2017). This audiovisual gain is probably due to the fact that visible speech carries articulatory information, provided by the mouth area, that is highly redundant with the corresponding auditory speech signal (Summerfield, 1987).…”
Section: Introduction
Mentioning confidence: 99%
“…Through face-to-face interactions, infants simultaneously hear the auditory speech signal and see the accompanying movements of the speaker’s face (Altvater-Mackensen and Grossmann, 2015). In adults, visible speech conveys redundant and complementary information (Miller and Nicely, 1955; Robert-Ribes et al., 1998) that reliably enhances auditory phonetic perception (i.e., Samuel and Lieblich, 2014) and facilitates lexical recognition (Brancazio, 2004; Barutchu et al., 2008; Buchwald et al., 2009; Fort et al., 2010, 2013; Havy et al., 2017). Here we ask whether young children, who have considerably less experience in watching others’ articulators, can benefit from visible speech as they learn new words.…”
Section: Introduction
Mentioning confidence: 99%
“…However, one research project has begun to address this issue. Havy et al. (2017) asked whether 18-month-old English-learning infants were able to learn new lexical mappings in either the auditory or the visual modality. The purpose was twofold: first, to determine whether visible speech alone can be used to guide lexical learning, and second, to determine whether information learned in one modality is available in the other through cross-modal translation of the input.…”
Section: Introduction
Mentioning confidence: 99%