Sensorimotor adaptation is driven by sensory prediction errors, the difference between predicted and actual feedback. When the position of the feedback is made uncertain, motor adaptation is attenuated. In the context of optimal sensory integration models, this effect has been attributed to the motor system discounting noisy feedback and thus reducing the learning rate. In its simplest form, optimal integration predicts that uncertainty should reduce learning for all error sizes. However, this prediction remains untested because manipulations of error size in standard visuomotor tasks confound the degree to which performance is influenced by other learning processes, such as strategy use. Here, we used a novel visuomotor task that isolates the contribution of implicit adaptation, independent of error size. In two experiments, we varied feedback uncertainty and error size factorially. At odds with the basic predictions of optimal integration theory, the results show that uncertainty attenuated learning only when the error size was small and had no effect when the error size was large. We discuss mechanisms that may account for this interaction: uncertainty may alter the relevance assigned to the error signal, or it may modify how the output of the adaptation system recalibrates the sensorimotor map.
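The "simplest form" of the optimal integration prediction can be illustrated with a toy state-space model in which the learning rate is scaled by a Kalman-like gain that shrinks as feedback noise grows. This is a minimal sketch, not the authors' model; the function names, the retention factor, and all parameter values are illustrative assumptions.

```python
import numpy as np

def kalman_gain(sigma_state, sigma_feedback):
    """Relative weighting of the sensory error under optimal integration.

    Higher feedback uncertainty (sigma_feedback) shrinks the gain, and
    with it the per-trial update gain * error, regardless of error size.
    """
    return sigma_state**2 / (sigma_state**2 + sigma_feedback**2)

def simulate_adaptation(rotation, sigma_feedback, n_trials=100,
                        retention=0.98, sigma_state=1.0):
    """Toy state-space model of adaptation to a visuomotor rotation (deg)."""
    gain = kalman_gain(sigma_state, sigma_feedback)
    x = 0.0                      # internal estimate of the perturbation
    estimates = []
    for _ in range(n_trials):
        error = rotation - x     # sensory prediction error on this trial
        x = retention * x + gain * error
        estimates.append(x)
    return np.array(estimates)

# Under this account, noisier feedback slows learning and lowers the
# asymptote for any rotation size, small or large, alike.
low_noise = simulate_adaptation(rotation=15, sigma_feedback=0.5)
high_noise = simulate_adaptation(rotation=15, sigma_feedback=2.0)
```

The interaction reported above (attenuation only for small errors) is exactly what this uniform-discounting sketch cannot produce, which is the puzzle the abstract raises.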
Sensorimotor adaptation is driven by sensory prediction errors, the difference between the predicted and actual feedback. When the position of the feedback is made uncertain, adaptation is attenuated. This effect, in the context of optimal sensory integration models, has been attributed to a weakening of the error signal driving adaptation. Here we consider an alternative hypothesis, namely that uncertainty alters the perceived location of the feedback. We present two visuomotor adaptation experiments to compare these hypotheses, varying the size and uncertainty of a visual error signal. Uncertainty attenuated learning when the error size was small but had no effect when the error size was large. This pattern of results favors the hypothesis that uncertainty does not impact the strength of the error signal but, rather, leads to mis-localization of the error. We formalize these ideas to offer a novel perspective on the effect of visual uncertainty on implicit sensorimotor adaptation.

SIGNIFICANCE STATEMENT: Current models of sensorimotor adaptation assume that the rate of learning is related to properties of the error signal (e.g., size, consistency, relevance). Recent evidence has challenged this view, pointing to a rigid, modular system, one that automatically recalibrates the sensorimotor map in response to movement errors, with minimal constraint. In light of these developments, this study revisits the influence of feedback uncertainty on sensorimotor adaptation. Adaptation was attenuated in response to a noisy feedback signal, but the effect was manifest only for small errors, not for large errors. This interaction suggests that uncertainty does not weaken the error signal; rather, it may influence the perceived location of the feedback and thus the change in the sensorimotor map induced by that error. These ideas are formalized to show how the motor system remains exquisitely calibrated, even if adaptation is largely insensitive to the statistics of error signals.
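The mis-localization account can be sketched as follows: assume a single-trial adaptation response that is roughly linear for small errors and saturates for large ones, and let visual uncertainty jitter the perceived location of the feedback. This is a toy formalization under stated assumptions, not the paper's actual model; the saturating cap, noise levels, and error sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptation_response(error, cap=5.0, slope=1.0):
    """Saturating single-trial update: roughly linear for small errors,
    flat (capped) for large ones (an assumed, illustrative form)."""
    return np.clip(slope * error, -cap, cap)

def expected_update(error, sigma_loc, n=100_000):
    """Mean update when the feedback's perceived location is jittered by
    zero-mean localization noise (the mis-localization hypothesis)."""
    perceived = error + rng.normal(0.0, sigma_loc, size=n)
    return adaptation_response(perceived).mean()

# Small error (2 deg): localization noise spreads percepts into the
# saturated (capped) region on one side while the other side still counts
# fully, so the averaged update shrinks. Large error (20 deg): the
# response is already saturated on virtually every trial, so the same
# noise leaves the average update untouched.
small_clear, small_noisy = expected_update(2, 0.01), expected_update(2, 4.0)
large_clear, large_noisy = expected_update(20, 0.01), expected_update(20, 4.0)
```

Note how the interaction in the data (attenuation only for small errors) falls out of jittered perceived error plus a saturating response, without any down-weighting of the error signal itself.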
Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The sources of these individual differences in perceived position, and their perceptual consequences, are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position-matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived object size (Experiment 3) across the visual field and found that these individual spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size that originate from a heterogeneous spatial resolution carried across the visual hierarchy.

Accurately registering the locations of objects is a critical visual function. Most other perceptual functions, including pattern and object recognition as well as visually guided behavior, hinge on first localizing objects. Position perception is generally assumed to be dictated by retinotopic location, which may explain much of the variance in perceived position. However, perceived position can be biased by various external factors, such as overt attention [1], motion [2], and saccadic eye movements [3]. The impact of these factors can be significant, especially considering the spatial scale at which object recognition and visually guided action operate: a 0.5-degree shift in the location of a pedestrian or car crossing a freeway could result in a catastrophic collision. The scale at which perception and action need to operate is often very fine, and many factors bias perceived position at a behaviorally relevant scale.

In the absence of these external factors, perceived position is often assumed to be uniformly dictated by retinotopic position. However, a recent study challenges this assumption, demonstrating that people mislocalize objects idiosyncratically and consistently even without any apparent change in the environment [4]. These unique biases in perceived object location were stable when retested after weeks or months, indicating a stable perceptual fingerprint for object location.

Why do people perceive idiosyncratically biased object locations in different parts of the visual field, and what are its perceptual consequences? Here, we test the possibility that variations in spatial resolution across the visual field cause these spatial distortions in perceived position. Visual acuity is known to vary across the visual field [5-7]. Because many models of localization depend, implicitly or explicitly, on the underlying resolution and homogeneity of spatial coding [1, 8-10], it is conceivable that this inhomogeneity in visual acuity could...
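The intuition that inhomogeneous spatial resolution could bias localization can be illustrated with a toy population code: if detectors sample one region of space more densely than another, a simple center-of-mass readout is pulled toward the better-resolved region, producing a local distortion. This is a hedged illustration only; the Gaussian tuning, the readout rule, and the sampling densities are assumptions, not the models cited above.

```python
import numpy as np

def decode_position(stimulus, centers, width=1.0):
    """Center-of-mass readout of a bank of Gaussian-tuned detectors."""
    responses = np.exp(-(stimulus - centers) ** 2 / (2 * width ** 2))
    return np.sum(responses * centers) / np.sum(responses)

# Uniform sampling of space: the readout is unbiased.
uniform = np.linspace(-10, 10, 201)

# Inhomogeneous sampling (denser coverage left of zero, sparser to the
# right): a stimulus near the boundary is mislocalized toward the
# densely sampled, better-resolved region.
skewed = np.concatenate([np.linspace(-10, 0, 161),
                         np.linspace(0.125, 10, 40)])

unbiased = decode_position(1.0, uniform)
biased = decode_position(1.0, skewed)
```

A patchwork of such local density variations across the visual field would yield exactly the kind of stable, observer-specific compressions and expansions the abstract describes.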