Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees, the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions. This causal effect is achieved simply by adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time, we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.

… (6), and in the competition between different cognitive representations (7-9). Many studies have explored these tensions, finding that moral decisions can be influenced by priming, highlighting, or framing one factor over another (4-6, 9). Despite this, almost no attention has been devoted to how moral deliberation plays out in the very moment of choice, or to what effect this might have on the decision process itself. In the current experiments we focused on the temporal dynamics of moral cognition. We hypothesized that tracking the gaze of participants while they decided between two options would provide knowledge sufficient to influence the outcome of the moral deliberation.

Our hypothesis derives from an understanding of human cognition that emphasizes dynamic interaction between cognition and environment through sensorimotor activation, a position supported by converging lines of evidence (10-31). Gaze patterns in humans reflect the course of reasoning during spatial indexing tasks, both in adults (10, 11) and in infants (12). Evidence from neural stimulation shows that saccadic programming and perceptual decisions develop together in the monkey brain (15, 16). In decision tasks, before asserting their preference for faces or similarly valued snack foods, people look more toward the alternative they are going to choose (17, 19). For example, the attentional drift-diffusion model (aDDM) proposes a computational mechanism underlying choice whe…
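To make the gaze-contingent procedure concrete, the sketch below accumulates fixation time per option and interrupts deliberation once the randomly predetermined target has been dwelt on long enough. It is a minimal illustration only: the dwell threshold, sampling rate, and simulated fixation stream are assumptions, not the study's actual parameters.

```python
import random

# Hypothetical sketch of the gaze-contingent prompting rule: accumulate
# dwell time per option and interrupt once the (randomly predetermined)
# target has been fixated for a set amount of time. The threshold and
# sampling rate below are assumptions for illustration.

TARGET_DWELL = 0.75   # assumed cumulative fixation threshold (seconds)
SAMPLE_DT = 0.01      # assumed eye-tracker sampling interval (100 Hz)

def run_trial(fixation_stream, target):
    """Return the prompt time, or None if the criterion is never met.

    `fixation_stream` yields "A", "B", or None (looking elsewhere),
    one sample per SAMPLE_DT.
    """
    dwell = {"A": 0.0, "B": 0.0}
    for i, fix in enumerate(fixation_stream):
        if fix in dwell:
            dwell[fix] += SAMPLE_DT
        if dwell[target] >= TARGET_DWELL:
            return (i + 1) * SAMPLE_DT  # interrupt and prompt the choice
    return None

# Simulated deliberation: random fixations between the two options.
random.seed(1)
stream = [random.choice(["A", "B", None]) for _ in range(3000)]
t = run_trial(stream, target="A")
print(f"Prompted at {t:.2f} s" if t is not None else "No prompt issued")
```

Note that the rule conditions only on *when* the participant looks, never on *what* they see, which is what makes the manipulation passive.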
We describe work in progress aimed at constructing a computational model of emotional learning and processing inspired by neurophysiological findings. The main brain areas modeled are the amygdala and the orbitofrontal cortex, and the interaction between them. We want to show that (1) there exist enough physiological data to suggest the overall architecture of a computational model, and (2) emotion plays a clear role in learning behavior. We review neurophysiological data and present a computational model that is subsequently tested in simulation.

In Mowrer's influential two-process theory of learning, the acquisition of a learned response was considered to proceed in two steps (Mowrer, 1960/1973). In the first step, the stimulus is associated with its emotional consequences. In the second step, this emotional evaluation shapes an association between the stimulus and the response. Mowrer made an important contribution to learning theory when he acknowledged that emotion plays an important role in learning. Another important aspect of the theory is that it suggests a role for emotions that can easily be implemented as a computational model. Different versions of the two-process theory have been implemented as computational models, for example, …
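To show how the two-step structure can be computed, here is a minimal amygdala-orbitofrontal learner in the spirit of the model described: an acquisition pathway whose weights only grow, and an inhibitory pathway that tracks mismatch and supports extinction. The specific update rules and the learning rates alpha and beta are illustrative assumptions, not the authors' exact equations.

```python
import numpy as np

# Minimal two-process emotional learner in the spirit of the
# amygdala-orbitofrontal model described above. Update rules and the
# learning rates alpha/beta are illustrative assumptions.

alpha, beta = 0.2, 0.2        # assumed amygdala / orbitofrontal rates
V = np.zeros(3)               # amygdala weights: emotional acquisition
W = np.zeros(3)               # orbitofrontal weights: learned inhibition

def step(s, reward):
    """One learning step for binary stimulus vector s; returns net output."""
    global V, W
    A = V @ s                 # amygdala activation (emotional value)
    O = W @ s                 # orbitofrontal inhibition
    E = A - O                 # net emotional output
    # Step 1: stimulus -> emotional value; amygdala weights never decrease.
    V = V + alpha * s * max(0.0, reward - A)
    # Step 2: the inhibitory pathway tracks mismatch, enabling extinction
    # (kept nonnegative here, an assumption for readability).
    W = np.maximum(W + beta * s * (E - reward), 0.0)
    return E

s = np.array([1.0, 0.0, 0.0])
for _ in range(50):           # acquisition: stimulus paired with reward
    step(s, reward=1.0)
print("after acquisition:", round(step(s, reward=1.0), 2))  # ~1.0
for _ in range(50):           # extinction: reward withheld
    step(s, reward=0.0)
print("after extinction:", round(step(s, reward=0.0), 2))   # ~0.0
```

The division of labor mirrors the physiological story: the amygdala retains the learned emotional association (V never decays), while the orbitofrontal pathway suppresses the response when the reinforcement is withdrawn.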
Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops.
This paper proposes a new learning set-up in the field of control systems for multifunctional hand prostheses. Two male subjects, each with a traumatic unilateral hand amputation, performed simultaneous symmetric movements with the healthy and the phantom hand. A data glove on the healthy hand was used as a reference to train the system to perform natural movements. Instead of a physical prosthesis with limited degrees of freedom, a virtual (computer-animated) hand was used as the target tool. Both subjects successfully performed seven different motor actions with the fingers and wrist. To reduce the system's training time, a tree-structured, self-organizing artificial neural network was designed. The training time never exceeded 30 seconds for any of the configurations used, three to four times faster than most currently used artificial neural network (ANN) architectures.
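The sketch below illustrates the mirrored-training idea: signals recorded on the amputated side are labeled by the data glove on the healthy hand, and the trained mapping then drives the virtual hand. A nearest-neighbour classifier stands in for the paper's tree-structured, self-organizing network, and the myoelectric feature dimensions and action labels are assumptions for illustration.

```python
import numpy as np

# Mirrored-training sketch: glove-labeled examples from the healthy hand
# supervise a classifier on (assumed) myoelectric features from the
# residual limb. A nearest-neighbour rule stands in for the paper's
# tree-structured, self-organizing network.

rng = np.random.default_rng(0)
N_FEATURES = 8                                  # assumed EMG feature count
ACTIONS = ["rest", "open", "close", "pinch", "point",
           "wrist_flex", "wrist_ext"]           # the seven motor actions

# Training data: 10 mirrored repetitions per action (placeholder features).
X_train = rng.normal(size=(70, N_FEATURES))
y_train = np.repeat(np.arange(len(ACTIONS)), 10)

def classify(x):
    """Return the action label of the nearest stored training example."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(dists)])

# Runtime: each classified sample drives the virtual (animated) hand.
sample = X_train[23] + rng.normal(scale=0.1, size=N_FEATURES)
print("virtual hand command:", ACTIONS[classify(sample)])
```

Using the virtual hand as the target tool decouples the learning problem from any particular prosthetic hardware, which is what allows all seven actions to be trained despite physical prostheses offering fewer degrees of freedom.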