To determine whether infants can abstract invariant facial expressions across different persons (i.e., can form facial-expression categories), groups of 18-, 24-, and 30-week-old infants (18 boys and 18 girls per group) were habituated, using the infant-control procedure, to photographs of 4 different female faces all wearing an identical expression (happy or surprise). In an immediately following test phase, categorization was inferred from greater generalization of habituation (less recovery of fixation) to 2 new female faces wearing the familiarized expression than to the same new faces wearing the altered (novel) expression. To rule out the possibility that generalization at test was due to a failure to discriminate the new persons, control groups of 18 boys and 18 girls at each age saw the same test faces following repeated presentations of only 1 of the 4 habituation faces. The results indicated that not until 30 weeks could infants differentiate happy and surprise expressions on a categorical basis. At 24 weeks, infants could detect a surprise expression following habituation to happy faces but could not do the reverse; at 18 weeks they could do neither. Overall, girls outperformed boys. The findings are consistent with recent evidence suggesting that the ability to extract invariant configural information about the human face does not emerge until about 7 months of age.
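The inference logic in this paradigm, inferring categorization from differential recovery of fixation, can be made concrete with a minimal sketch. The code below is purely illustrative and is not the authors' analysis; the function name, the two-trial baseline, and all looking times are hypothetical placeholders, and a real analysis would compare recovery scores statistically across groups.

```python
# Illustrative sketch of habituation/recovery scoring (hypothetical values).

def recovery_score(habituation_fixations, test_fixation):
    """Recovery of fixation: test looking time minus the looking time
    at the end of habituation (here, the mean of the last 2 trials)."""
    baseline = sum(habituation_fixations[-2:]) / 2
    return test_fixation - baseline

# Hypothetical looking times (seconds) for one infant.
habituation = [18.0, 14.5, 9.0, 6.2, 5.8]   # declining fixation = habituation
familiar_expression_test = 6.5               # new faces, familiarized expression
novel_expression_test = 12.1                 # same new faces, novel expression

r_familiar = recovery_score(habituation, familiar_expression_test)
r_novel = recovery_score(habituation, novel_expression_test)

# Categorization is inferred when recovery to the novel expression reliably
# exceeds recovery to the familiarized expression, i.e., habituation
# generalizes across new faces wearing the familiar expression.
print(f"recovery (familiar expression): {r_familiar:+.1f} s")
print(f"recovery (novel expression):    {r_novel:+.1f} s")
```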
To determine whether young infants discriminate photographs of different emotions on an affect-relevant basis or on the basis of isolated features unrelated to emotion, groups of 17-, 23-, and 29-week-olds were habituated to slides of 8 women posing one of three facial expressions (Toothy Angry, Nontoothy Angry, or Nontoothy Smiling) and were then shown 2 new women in the familiarized expression and in a novel Toothy Smiling expression. At all 3 ages, recovery to the novel Toothy Smiling faces occurred only after habituation to Nontoothy faces (whether smiling or angry), not after habituation to Toothy Angry faces, indicating that infants had been responsive to nonspecific features of the photographs (presence or absence of bared teeth) rather than to affectively relevant configurations of features. In a second experiment, 2 older age groups (35 and 41 weeks) also proved insensitive to affect-related aspects of still faces, though more so for angry than for happy expressions. It is suggested that the young infant's difficulty in extracting emotional information from static stimuli may be attributable to the absence of the critical invariants (dynamic, multimodally specified) that characterize naturalistic expressions of emotion.
The ability of infants to discriminate dynamic, multimodal expressions of emotion was assessed in a series of 5 experiments. In Experiment 1, 48 infants at each of 4 and 5 months of age (total N = 96) were habituated to color/sound videotapes of 6 women speaking the same script sadly or happily. Following habituation, 2 new women were presented, each speaking once in the familiarized emotion and once in the novel emotion. Order of stimulus presentation (Sad→Happy, Happy→Sad) was counterbalanced. The 5-month-olds discriminated the expressions in both directions, whereas the 4-month-olds discriminated them only in the Sad→Happy direction. In Experiment 2, the ability of 5- and 7-month-olds to discriminate happy and angry expressions was examined using the Happy→Angry stimulus order alone; only the 7-month-olds differentiated these stimuli. Experiment 3 showed that 7-month-olds could not distinguish the same Happy→Angry stimuli without vocal accompaniment. Experiment 4 asked whether the voice played an equally important role in the Sad→Happy discrimination of Experiment 1; surprisingly, a 5-month group tested without voice readily discriminated these stimuli. Finally, Experiment 5 asked whether an Angry→Happy comparison might also be discriminable without voice: a 7-month group tested in this manner could not discriminate these expressions, whereas a group tested with voice could. The results indicate that infants can differentiate dynamic, multimodal expressions as early as 5 months, that they distinguish dynamically distinct expressions earlier than more similarly animated ones, and that they rely more on the voice than on the face in making these discriminations.