Six studies examined the goal contagion hypothesis, which claims that individuals may automatically adopt and pursue a goal that is implied by another person's behavior. Participants were briefly exposed to behavioral information implying a specific goal and were then given the opportunity to act on the goal in a different way and context. Studies 1-3 established the goal contagion phenomenon by showing that the behavioral consequences of goal contagion possess features of goal directedness: (a) They are affected by goal strength, (b) they have the quality of goal appropriateness, and (c) they are characterized by persistence. Studies 4-6 show that people do not automatically adopt goals when the observed goal pursuit is conducted in an unacceptable manner, because the goal will then be perceived as unattractive. The results are discussed in the context of recent research on automatic goal pursuits.
The modal view in the cognitive and neural sciences holds that consciousness is necessary for abstract, symbolic, and rule-following computations. Hence, semantic processing of multiple-word expressions, and performing of abstract mathematical computations, are widely believed to require consciousness. We report a series of experiments in which we show that multiple-word verbal expressions can be processed outside conscious awareness and that multistep, effortful arithmetic equations can be solved unconsciously. All experiments used Continuous Flash Suppression to render stimuli invisible for relatively long durations (up to 2,000 ms). Where appropriate, unawareness was verified using both objective and subjective measures. The results show that novel word combinations, in the form of expressions that contain semantic violations, become conscious before expressions that do not contain semantic violations, that the more negative a verbal expression is, the more quickly it becomes conscious, and that subliminal arithmetic equations prime their results. These findings call for a significant update of our view of conscious and unconscious processes.

nonconscious processes | automaticity | CFS

The scientific investigation of consciousness and the human unconscious is an ongoing interdisciplinary effort that is central to our understanding of the human mind. The goal is simple: to map the functions performed by nonconscious processes and the functions that are performed consciously, and to understand how these two sets of functions are implemented in the brain. The modal view in cognitive sciences associates consciousness with capabilities that are uniquely (or largely) human. Two prime examples of capabilities of this kind, which are cataloged among the greatest achievements of human culture, are complex language and abstract mathematics.
It is not surprising, then, that the modal view holds that the semantic processing of multiple-word expressions and performing of abstract mathematical computations require consciousness (1-4). In more general terms, sequential rule-following manipulations of abstract symbols are thought to lie outside the capabilities of the human unconscious. This view has received extensive empirical support. Although numerous studies have documented processing of subliminally presented single units of meaning (e.g., a word or a number) (5-8) as well as unconscious retrieval of simple arithmetic facts (9-11), previous research has generally failed to document unconscious performance of functions that require multiple (and sequenced) rule-based operations on more than one abstract unit (12-14). [Recently, work by Ric and Muller (10) has shown that simple addition (adding two numbers with a sum that is not greater than six) can occur nonconsciously. Although addition of this sort does not require more than one operation, we find these data very encouraging in terms of the challenge that we propose here.] The present study challenges this modal view of consciousness and the unconscious. Specifically...
Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly "read out" from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels.
Physiognomy, the art of reading personality traits from faces, dates back to ancient Greece, and is still very popular. The present studies examine several aspects and consequences of the process of reading traits from faces. Using faces with neutral expressions, it is demonstrated that personality information conveyed in faces changes the interpretation of verbal information. Moreover, it is shown that physiognomic information has a consistent effect on decisions, and creates overconfidence in judgments. It is argued, however, that the process of "reading from faces" is just one side of the coin, the other side of which is "reading into faces." Consistent with the latter, information about personality changes the perception of facial features and, accordingly, the perceived similarity between faces. The implications of both processes and questions regarding their automaticity are discussed.
With a small yet increasing number of exceptions, the cognitive sciences have enthusiastically endorsed the idea that there are basic facial expressions of emotions that are created by specific configurations of facial muscles. We review evidence suggesting an inherent role for context in emotion perception. Context does not merely change emotion perception at the edges; it leads to radical categorical changes. The reviewed findings suggest that configurations of facial muscles are inherently ambiguous, and they call for a different approach to the understanding of facial expressions of emotions. The costs of sticking with the modal view, and the advantages of an expanded view, are succinctly reviewed.