Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences that changed the face from neutral to a peak expression through different numbers of intermediate stages, creating fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was identified more accurately from slow sequences. Happiness, and to some extent surprise, was identified better from faster sequences, while anger was detected most accurately from the medium-paced sequences. In experiment 2 we used an intensity-rating task with static as well as dynamic images to examine whether the effects were due to the total display time or to the speed of the sequence. Accuracies of expression judgments were derived from the rated intensities, and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy expressions were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that speed, not time, was responsible for these effects. These results suggest that representations of basic expressions of emotion encode dynamic as well as static properties.
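The key manipulation above is the number of intermediate morph frames between the neutral and peak images: shown at a fixed frame rate, fewer frames means a faster apparent expression change. The original stimuli were produced with face-morphing software (geometric warping plus blending); the sketch below is only a minimal illustration of the frame-count manipulation using a plain cross-dissolve, and the function and parameter names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dissolve_sequence(neutral, peak, n_frames):
    """Blend linearly from a neutral face to a peak expression.

    neutral, peak : float arrays of identical shape, values in [0, 1].
    n_frames      : total frame count; 6, 26, and 101 correspond to the
                    fast, medium, and slow sequences described above.
    """
    weights = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - w) * neutral + w * peak for w in weights]

# At a fixed frame rate, a 6-frame sequence reaches the peak expression
# far sooner than a 101-frame one, so frame count controls apparent speed.
```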
Whether face gender perception is processed by encoding holistic (whole-face) or featural (part-based) information is a controversial issue. Although neuroimaging studies have identified brain regions related to face gender perception, the temporal dynamics of this process remain under debate. Here, we identified the mechanism and temporal dynamics of face gender perception. We used stereoscopic depth manipulation to create two conditions: the front condition and the behind condition. In the front condition, facial patches were presented stereoscopically in front of the occluder and participants perceived them as disjoint parts (featural cues). In the behind condition, facial patches were presented stereoscopically behind the occluder and were amodally completed and unified into a coherent face (holistic cues). We performed three behavioral experiments and one electroencephalography (EEG) experiment, and compared the results of the front and behind conditions. We found faster reaction times (RTs) in the behind condition than in the front condition, and observed priming effects and aftereffects only in the behind condition. Moreover, the EEG experiment revealed that face gender perception is processed in a relatively late phase of visual recognition (200–285 ms). Our results indicate that holistic information is critical for face gender perception, and that this process occurs at a relatively late latency.
We conducted two experiments to investigate the psychological factors affecting the attractiveness of composite faces. Feminised or juvenilised Japanese faces were created by morphing between average male and female adult faces, or between an average adult male (female) face and an average boy (girl) face. In experiment 1, we asked the participants to rank the attractiveness of these faces. The results showed moderately juvenilised faces to be highly attractive. In experiment 2, we analysed the impressions the participants had of the composite faces with the semantic-differential method and determined the factors that most strongly affected attractiveness. On the basis of the factor scores, we plotted the faces in factor spaces and analysed the locations of the attractive faces. We found that most of the attractive juvenilised faces conveyed impressions corresponding to an augmentation of femininity, characterised by the 'elegance', 'mildness', and 'youthfulness' factors that the attractive faces potentially possessed.
Extraction of wrinkles and spots in facial images is useful not only for face recognition but also for facial image synthesis. While many studies have focused on extracting facial parts such as the eyes, mouth, or nose, only a few have attempted to extract wrinkles and spots. This paper proposes a novel method for extracting wrinkles and spots based on local analyses of their general shape properties, rather than on the global spatial filters applied by most previous methods. Since the proposed method can extract wrinkles and spots separately, they can be manipulated effectively for facial image synthesis. We conducted an age-evaluation experiment, which confirmed that effective facial image synthesis can be achieved when not only facial parts but also wrinkles and spots are manipulated with the proposed method.
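The abstract does not spell out the local shape analysis, but a standard way to operationalise "wrinkles are elongated, spots are isotropic" is per-pixel Hessian eigenanalysis at a chosen scale. The sketch below is an assumption along those lines, not the paper's actual algorithm; the names `classify_local_shape`, `sigma`, and `ratio` are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def classify_local_shape(gray, sigma=2.0, ratio=0.5):
    """Separate ridge-like (wrinkle) from blob-like (spot) responses.

    gray  : 2-D float image with dark wrinkles/spots on brighter skin.
    sigma : analysis scale in pixels (hypothetical default).
    ratio : eigenvalue ratio splitting elongated vs. isotropic shapes.
    """
    # Per-pixel Hessian from Gaussian derivatives.
    hxx = gaussian_filter(gray, sigma, order=(0, 2))
    hyy = gaussian_filter(gray, sigma, order=(2, 0))
    hxy = gaussian_filter(gray, sigma, order=(1, 1))
    root = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    lam1 = 0.5 * (hxx + hyy + root)  # larger eigenvalue
    lam2 = 0.5 * (hxx + hyy - root)  # smaller eigenvalue
    # Dark structures curve upward in intensity, so lam1 > 0 there.
    # Wrinkle: strong curvature across the line, weak curvature along it.
    wrinkles = np.where((lam1 > 0) & (np.abs(lam2) < ratio * lam1), lam1, 0.0)
    # Spot: comparable positive curvature in every direction.
    spots = np.where((lam2 > 0) & (lam2 >= ratio * lam1), lam1, 0.0)
    return wrinkles, spots
```

Because the two maps come out separately, a synthesis step could attenuate or amplify each independently, which is consistent with the manipulation the abstract describes.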