The relation between alexithymia and both the domain and facet levels of the five-factor model (FFM) of personality was examined in a sample of 101 university students using the Twenty-Item Toronto Alexithymia Scale (TAS-20; Bagby, Taylor, & Parker, 1994) and the Revised NEO Personality Inventory (Costa & McCrae, 1992c). Consistent with the alexithymia construct, the TAS-20 correlated positively with Neuroticism (N) and negatively with Extraversion (E) and Openness (O), whereas no significant relations were found with Agreeableness (A) or Conscientiousness (C). Analysis of the lower-order traits (i.e., facets) of the FFM revealed that alexithymia was predicted by depression (N); positive emotions and assertiveness (E); feelings and actions (O); altruism, tender-mindedness, and modesty (A); and competence (C). These results support the uniqueness of the alexithymia construct, which is represented by a cluster of traits across the dimensions and facets of the FFM.
We examine whether spontaneous facial expressions provide observers with sufficient information to distinguish accurately which of seven affective states (six emotional and one neutral) another person is experiencing. Senders' facial expressions were covertly videotaped as they watched emotionally loaded slides. After each slide, senders nominated the emotion term that best described their affective reaction and rated the pleasantness and strength of that reaction. Receivers who later viewed the senders' videotaped facial expressions made the same nominations and ratings. The central measure of communication accuracy was the match between senders' and receivers' emotion nominations. Overall accuracy was significantly greater than chance, although it was not impressive in absolute terms. Only happy, angry, and disgusted expressions were recognized at above-chance rates, whereas surprised expressions were recognized at rates significantly worse than chance. Overall, female subjects were significantly better senders than male subjects. Although neither sex was better at receiving facial expressions, female subjects were better receivers of female senders' expressions than of male senders' expressions. Female senders' neutral and surprised expressions were recognized more accurately than those of male senders. The only sex difference in decoding was a tendency for male subjects to be more accurate at recognizing anger. The results are discussed in relation to those of other studies of emotional communication through facial expression.

Since Ekman, Friesen, and Ellsworth's (1972) influential review and reanalysis of experiments on facial expression conducted between 1914 and 1970, there has been a widespread assumption that facial expressions can be used to infer the emotional experiences of others with a reasonable degree of accuracy. This assumption, however, is a generalization that requires closer examination: Under what conditions can facial expressions communicate emotional experiences? We are concerned with the capacity of dynamic, spontaneous facial expressions to communicate the quality of emotional experience. Surprisingly few studies bear on this issue; certainly, none of the studies reviewed by Ekman et al. (1972) allows us to draw any conclusion about the communication of specific emotional quality by way of dynamic, spontaneous emotional expressions.

Previous studies are limited for this purpose by one or more of six design problems: first, the use of posed rather than spontaneous expressions; second, the use of general affective categories rather than specific emotions; third, assessing accuracy by identification of the class of eliciting stimulus rather than the emotion experienced by the sender; fourth, the use of only two classes of stimuli (or emotional states); fifth, showing receivers more than just facial expressions; and sixth, the use of still rather than dynamic expressions.
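To make the accuracy measure concrete, the following is a minimal sketch of how a match-based accuracy score could be computed and compared against the chance rate of 1/7 implied by seven response categories. The function name, the made-up trial labels, and the use of a simple binomial test are illustrative assumptions for exposition; they are not the authors' actual analysis.

    # Sketch of the sender-receiver matching measure described above.
    # Chance agreement with seven categories is 1/7; the binomial test
    # here is an assumed, illustrative way to compare accuracy to chance.
    from scipy.stats import binomtest

    def communication_accuracy(sender_labels, receiver_labels, n_categories=7):
        """Proportion of trials on which the receiver's emotion nomination
        matches the sender's, plus a binomial test against 1/n_categories."""
        assert len(sender_labels) == len(receiver_labels)
        hits = sum(s == r for s, r in zip(sender_labels, receiver_labels))
        n = len(sender_labels)
        accuracy = hits / n
        test = binomtest(hits, n, p=1.0 / n_categories, alternative="greater")
        return accuracy, test.pvalue

    # Example with 20 hypothetical trials and made-up emotion nominations
    senders   = ["happy", "angry", "neutral", "disgust", "surprise"] * 4
    receivers = ["happy", "angry", "fear",    "disgust", "happy"] * 4
    acc, p = communication_accuracy(senders, receivers)
    print(f"accuracy = {acc:.2f}, p (vs. 1/7 chance) = {p:.3f}")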