Human actions may be driven endogenously (to produce desired environmental effects) or exogenously (to accommodate environmental demands). A large body of evidence indicates that these two kinds of action are controlled by different neural substrates. However, little is known about what happens, in functional terms, on these different "routes to action". Ideomotor approaches claim that actions are selected with respect to their perceptual consequences. We report experiments that support the validity of the ideomotor principle and that, at the same time, show it is subject to a far-reaching constraint: it holds only for endogenously driven actions. Our results suggest that the activity of the two "routes to action" is based on different types of learning: the activity of the system guiding stimulus-based actions is accompanied by stimulus-response (sensorimotor) learning, whereas the activity of the system controlling intention-based actions results in action-effect (ideomotor) learning.
When we move our eyes, we process objects in the visual field with different spatial resolution due to the nonhomogeneity of our visual system. In particular, peripheral objects are only coarsely represented, whereas they are represented with high acuity when foveated. To keep track of the visual features of objects across eye movements, these changes in spatial resolution have to be taken into account. Here, we develop and test a new framework proposing a visual feature prediction mechanism based on past experience to deal with the changes in spatial resolution accompanying saccadic eye movements. In 3 experiments, we first exposed participants to an altered visual stimulation in which, unnoticed by participants, 1 object systematically changed visual features during saccades. Experiments 1 and 2 then demonstrate that feature prediction during peripheral object recognition is biased toward previously associated postsaccadic foveal input and that this effect is particularly associated with making saccades. Moreover, Experiment 3 shows that during visual search, feature prediction is biased toward previously associated presaccadic peripheral input. Together, these findings demonstrate that the visual system uses past experience to predict how peripheral objects will look in the fovea, and what foveal search templates should look like in the periphery. As such, they support our framework based on ideomotor theory and shed new light on why we are mostly unaware of the limited acuity of peripheral vision, and on how we are nevertheless able to locate relevant objects in the periphery.
Human actions may be carried out in response to exogenous stimuli (stimulus based) or they may be selected endogenously on the basis of the agent's intentions (intention based). We studied the functional differences between these two types of action during action-effect (ideomotor) learning. Participants underwent an acquisition phase in which each key-press (left/right) triggered a specific tone (low pitch/high pitch) either in a stimulus-based or in an intention-based action mode. Consistent with previous findings, we demonstrate that auditory action effects gain the ability to prime their associated responses in a later test phase only if the actions were selected endogenously during the acquisition phase. Furthermore, we show that this difference in ideomotor learning is not due to different attentional demands for stimulus-based and intention-based actions. Our results suggest that ideomotor learning depends on whether or not the action is selected in the intention-based action mode, whereas the amount of attention devoted to the action effect is less important.
According to ideomotor theory, action-effect associations are crucial for voluntary action control. Recently, a number of studies have begun to investigate the conditions that mediate the acquisition and application of action-effect associations by comparing actions carried out in response to exogenous stimuli (stimulus based) with actions selected endogenously (intention based). There is evidence that the acquisition and/or application of action-effect associations is boosted when acting in an intention-based action mode. For instance, bidirectional action-effect associations were diagnosed in a forced-choice test phase if participants had previously experienced action-effect couplings in an intention-based, but not in a stimulus-based, action mode. The present study aims to investigate the effects of the action mode on action-effect associations in more detail. In a series of experiments, we compared the strength and durability of short-term action-effect associations (binding) immediately following intention-based as well as stimulus-based actions. Moreover, long-term action-effect associations (learning) were assessed in a subsequent test phase. Our results show short-term action-effect associations of equal strength and durability for both action modes. However, replicating previous results, long-term associations were observed only following intention-based actions. These findings indicate that the effect of the action mode on long-term associations cannot merely be a result of accumulated short-term action-effect bindings. Instead, only those episodic bindings that integrate action-relevant aspects of the processing event are selectively perpetuated and retrieved, i.e., in the case of intention-based actions, the link between action and ensuing effect.