2016
DOI: 10.1037/xhp0000208

Newborns’ sensitivity to the visual aspects of infant-directed speech: Evidence from point-line displays of talking faces.

Abstract: From the first time a newborn is held, he or she is attracted to the human face. A talking face is even more captivating, as it is the first time the newborn hears and sees another human talking. Older infants are relatively good at detecting the relationship between images and sounds when someone is addressing them, but it is unclear whether this ability depends on experience. Using an intermodal matching procedure, we presented newborns with 2 silent point-line displays representing the same face uttering…

Cited by 25 publications (26 citation statements) · References 43 publications
“…Prosody and gestures also overlap in terms of which linguistic functions they are used for. Infants use visual correlates of prosody to segment the speech stream (e.g., Kitamura et al., 2014; Guellaï et al., 2016), to organize information at the discourse level (e.g., Nicoladis et al., 1999; Capone and McGregor, 2004; Mathew et al., 2017), and to express emotions, intentions, and beliefs (Sullivan and Lewis, 2003; Esteve-Gibert and Prieto, 2014; Berman et al., 2016; Aureli et al., 2017; González-Fuente, 2017). Children are sensitive to the fact that visual cues convey relevant linguistic meaning, and experimental evidence shows that gestures are processed earlier and more accurately than prosodic or lexical cues (Armstrong et al., 2014; Esteve-Gibert et al., 2017c; Hübscher et al., 2017).…”
Section: Discussion
confidence: 99%
“…Results showed that infants reliably detect auditory and visual congruencies in the displays. It seems that this ability emerges early in development, as newborns are already able to match a facial display to the corresponding speech stream (Guellaï et al., 2016).…”
Section: Implications of the Audio–Visual Integration for Word Learning
confidence: 99%
“…By 3 months, infants are capable of categorizing animates and inanimates based on dynamic visual information (Arterberry & Bornstein, ). From birth on, they start associating lip movements with speech information, and facial expressions with emotional tone in voice (Guellai, Streri, Chopin, Rider, & Kitamura, ; Soken & Pick, ), and at 6.5 months, they match affective body movements with voice (Zieber, Kangas, Hock, & Bhatt, ). Hence, much of the existing infant literature suggests that infants show a special interest for information about humans in multiple sensory domains, and that they seem capable of combining such information from early on.…”
Section: Discussion
confidence: 99%
“…An early language environment presents considerable complexity to an infant sorting out phonetic categories, not only through its variability in speakers and speech registers, but also through the availability of information from multiple sensory modalities. Despite this complexity, newborns are able to perceive commonalities in visual and auditory information in continuous speech (Guellaï, Streri, Chopin, Rider & Kitamura, ). While still not a robust effect (Desjardins & Werker, ), classical studies show that infants will look longer at a face articulating a heard vowel than at a face articulating a non-matching vowel (Aldridge, Braga, Walton & Bower, ; Kuhl & Meltzoff, ; Patterson & Werker, ).…”
Section: Introduction
confidence: 99%