Motor behaviors are shaped not only by current sensory signals but also by the history of recent experiences. For instance, repeated movements toward a particular target bias subsequent movements toward that target direction. This process, called use-dependent plasticity (UDP), is considered a basic, goal-independent way of forming motor memories. Most studies regard movement history as the critical component that leads to UDP (Classen et al., 1998; Verstynen and Sabes, 2011). However, the effect of learning (i.e., improved performance) on UDP during movement repetition has not been investigated. Here, we used transcranial magnetic stimulation in two experiments to assess plasticity changes occurring in the primary motor cortex after individuals repeated reinforced and nonreinforced actions. The first experiment assessed whether learning a skill task modulates UDP. We found that a group that successfully learned the skill task showed greater UDP than a group that did not accumulate learning but performed comparable repeated actions. The second experiment aimed to determine the role of reinforcement learning in UDP while controlling for reward magnitude and action kinematics. We found that providing subjects with a binary reward, without visual feedback of the cursor, increased UDP effects, whereas subjects who received a comparable reward that was not contingent on their actions merely maintained the previously induced UDP. Our findings illustrate how reinforcing consistent actions strengthens use-dependent memories and provide insight into operant mechanisms that modulate plastic changes in the motor cortex. SIGNIFICANCE STATEMENT Performing consistent motor actions induces use-dependent plastic changes in the motor cortex. This plasticity reflects one of the basic forms of human motor learning. Past studies assumed that this form of learning is driven exclusively by repetition of actions. Here, however, we show that success-based reinforcement signals can affect the human use-dependent plasticity (UDP) process. Our results indicate that learning augments and interacts with UDP. This effect is important for understanding the interplay between different forms of motor learning, and it suggests that reinforcement is not only important for learning new behaviors but can also shape subsequent behavior through its interaction with UDP.
Faster relearning of an external perturbation, termed savings, offers a behavioral link between motor learning and memory. To explain savings in reaching adaptation experiments, recent models have proposed the existence of multiple learning components, each with different learning and forgetting properties that may change following initial learning. Nevertheless, the existence of these components in rhythmic movements with other effectors, such as during locomotor adaptation, has not yet been studied. Here, we study savings in locomotor adaptation in two experiments. In the first, subjects adapted to speed perturbations while walking on a split-belt treadmill, briefly adapted to a counterperturbation, and then readapted. In the second experiment, subjects readapted after a prolonged washout period following initial adaptation. In both experiments we find clear evidence for increased learning rates (savings) during readaptation. We show that the basic error-based multiple-timescale linear state-space model is not sufficient to explain savings during locomotor adaptation. Instead, we show that locomotor adaptation leads to changes in learning parameters, such that learning rates are faster during readaptation. Interestingly, we find an intersubject correlation between the slow learning component in initial adaptation and the fast learning component in the readaptation phase, suggesting an underlying mechanism for savings. Together, these findings suggest that savings in locomotion and in reaching may share common computational and neuronal mechanisms; both are driven by the slow learning component and are likely to depend on cortical plasticity. Keywords: computational motor control; locomotor adaptation; motor learning; split-belt
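To make the modeling claim concrete, the following is a minimal sketch of the basic error-based two-state (fast plus slow) linear state-space model referred to above, simulated over an adaptation / counterperturbation / readaptation schedule. The parameter values, schedule lengths, and function names are illustrative assumptions, not the authors' fitted values; testing the parameter-change account would amount to fitting the learning rates separately to the initial-adaptation and readaptation phases.

```python
import numpy as np

def simulate_two_state(perturbation, A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Standard two-state (fast + slow) linear state-space model of adaptation.

    x_f[n+1] = A_f * x_f[n] + B_f * e[n]   (fast: learns quickly, forgets quickly)
    x_s[n+1] = A_s * x_s[n] + B_s * e[n]   (slow: learns slowly, retains well)
    Net adaptation x[n] = x_f[n] + x_s[n]; error e[n] = p[n] - x[n].
    Default parameter values are illustrative, not values fitted to gait data.
    """
    x_f, x_s = 0.0, 0.0
    x = np.zeros(len(perturbation))
    for n, p in enumerate(perturbation):
        x[n] = x_f + x_s
        e = p - x[n]                   # error on this stride
        x_f = A_f * x_f + B_f * e
        x_s = A_s * x_s + B_s * e
    return x

# Adaptation, brief counterperturbation, readaptation (strides as "trials"),
# analogous to the first experiment described above.
schedule = np.concatenate([np.ones(150), -np.ones(20), np.ones(150)])
net_adaptation = simulate_two_state(schedule)

# With fixed parameters, any savings the model produces comes only from residual
# slow-state memory. Testing the parameter-change account would mean fitting the
# learning rates separately to the initial-adaptation and readaptation strides
# and asking whether the readaptation rates (especially the fast one) are larger.
```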
It has been suggested that a feedforward control mechanism drives the adaptation of spatial and temporal interlimb locomotion variables. However, the internal representation of limb kinetics during split-belt locomotion has not yet been studied. In hand movements, kinetic and kinematic parameters are thought to be controlled by separate neural processes; it is therefore possible that separate neural processes are also responsible for kinetic and kinematic locomotion parameters. In the present study, we assessed the adaptation of limb kinetics by analyzing the ground reaction forces (GRFs) and the center of pressure (COP) during adaptation to a speed perturbation, using a split-belt treadmill with an integrated force plate. We found that both the GRF of each leg at initial contact and the COP changed gradually and showed motor aftereffects during early postadaptation, suggesting the use of a feedforward predictive mechanism. In contrast, the GRF of each leg during the single-support period appeared to rely on a feedback control mechanism: it changed rapidly during the adaptation phase and showed no motor aftereffect when the speed perturbation was removed. Finally, we found that the motor adaptation of the GRF and the COP is mediated by a dual-rate process. Our results suggest two important contributions to the neural control of locomotion. First, different control mechanisms are responsible for forces during the single- and double-support periods, as previously reported for kinematic variables. Second, motor adaptation during split-belt locomotion is mediated by fast and slow adaptation processes.
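As an illustration of what a dual-rate characterization of the kinetic adaptation could look like, here is a small sketch that fits a sum of a fast and a slow exponential to a per-stride GRF asymmetry trace. The data in the example are synthetic, and the variable names (grf_error, strides) and initial guesses are assumptions for illustration only, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_rate(stride, a_fast, tau_fast, a_slow, tau_slow):
    """Sum of a fast and a slow exponential decay:
    y(n) = a_fast * exp(-n / tau_fast) + a_slow * exp(-n / tau_slow)."""
    return a_fast * np.exp(-stride / tau_fast) + a_slow * np.exp(-stride / tau_slow)

# 'grf_error' stands in for a per-stride kinetic asymmetry measure during the
# adaptation phase (e.g., the between-leg difference in GRF at initial contact);
# the data here are synthetic, for illustration only.
rng = np.random.default_rng(0)
strides = np.arange(300)
grf_error = dual_rate(strides, 0.6, 8.0, 0.4, 120.0) + 0.03 * rng.standard_normal(300)

# Fit the dual-rate curve; clearly separated time constants (tau_fast << tau_slow)
# would be consistent with fast and slow adaptation processes.
params, _ = curve_fit(dual_rate, strides, grf_error, p0=[0.5, 5.0, 0.5, 100.0])
a_f, tau_f, a_s, tau_s = params
print(f"fast: amp={a_f:.2f}, tau={tau_f:.1f} strides; "
      f"slow: amp={a_s:.2f}, tau={tau_s:.1f} strides")
```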
Motor exploration, a trial-and-error process in search of better motor outcomes, is known to serve a critical role in motor learning. It is particularly relevant during reinforcement learning, where actions leading to a successful outcome are reinforced while unsuccessful actions are avoided. Although motor exploration is beneficial early on for finding the correct solution, maintaining high levels of exploration later in the learning process might be deleterious. Whether and how the level of exploration changes over the course of reinforcement learning, however, remains poorly understood. Here we evaluated temporal changes in motor exploration while healthy participants learned a reinforcement-based motor task. We defined exploration as the magnitude of trial-to-trial change in movements as a function of whether the preceding trial resulted in success or failure. Participants were required to find the optimal finger-pointing direction using binary feedback of success or failure. We found that the magnitude of exploration gradually increased over time in participants who learned the task, whereas exploration remained low in participants who were unable to correctly adjust their pointing direction. Interestingly, exploration remained elevated when participants underwent a second training session, and this was associated with faster relearning. These results indicate that the motor system may flexibly upregulate the extent of exploration during reinforcement learning, as if acquiring a strategy that facilitates subsequent learning. Our findings also show that exploration affects reinforcement learning and vice versa, indicating an interactive relationship between the two. Reinforcement-based tasks could thus be used as primers to increase exploratory behavior, leading to more efficient subsequent learning. NEW & NOTEWORTHY Motor exploration, the ability to search for the correct actions, is critical to learning motor skills. Despite this, whether and how the level of exploration changes over the course of training remains poorly understood. We showed that exploration increased and remained high throughout training of a reinforcement-based motor task. Interestingly, elevated exploration persisted and facilitated subsequent learning. These results suggest that the motor system upregulates exploration as if learning a strategy to facilitate subsequent learning.
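The exploration measure described above (trial-to-trial change conditioned on the previous trial's outcome) can be written down compactly. The sketch below, with hypothetical variable names (angles, success) and synthetic data, shows one way such a measure could be computed from per-trial pointing directions and binary feedback; it is an assumption-laden illustration, not the authors' analysis code.

```python
import numpy as np

def exploration_by_outcome(angles, success):
    """Mean absolute trial-to-trial change in pointing direction, split by
    whether the preceding trial was a success or a failure.

    angles  : 1-D array of pointing directions (deg), one per trial
    success : same-length boolean array of binary feedback
    Returns (mean change after failure, mean change after success).
    """
    angles = np.asarray(angles, dtype=float)
    success = np.asarray(success, dtype=bool)
    delta = np.abs(np.diff(angles))    # |change| from trial n to trial n+1
    prev_outcome = success[:-1]        # outcome of trial n
    return delta[~prev_outcome].mean(), delta[prev_outcome].mean()

# Example with synthetic data: larger changes after failures than after
# successes indicate outcome-dependent exploration; computing this measure per
# block would show whether exploration rises over training, as reported above.
rng = np.random.default_rng(1)
angles = np.cumsum(rng.normal(0.0, 2.0, 200))
success = rng.random(200) < 0.4
after_failure, after_success = exploration_by_outcome(angles, success)
print(f"after failure: {after_failure:.2f} deg, after success: {after_success:.2f} deg")
```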
The human motor control system behaves gracefully in a dynamic, time-varying environment. Here, we explored the predictive capabilities of the motor system in a simple motor task of lifting a series of virtual objects. When subjects lift an object, they use an expectation of the object's weight to generate the motor command. Models of motor learning typically employ learning algorithms that essentially expect the future to be similar to the previously experienced environment. In this study, we asked subjects to lift a series of increasing weights and determined whether they extrapolated from past experience and predicted the next weight in the series, even though that weight had never been experienced. The grip force at the beginning of each lift provides a clear indication of the motor expectation. In contrast to the motor learning literature, which asserts that adaptation reflects an expectation formed as a weighted average of past experience, our results suggest that the motor system is able to predict the subsequent weight that follows a series of increasing weights.
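The contrast drawn above, a weighted average of past experience versus extrapolation of a trend, can be illustrated with a small sketch. The weight series, smoothing rate, and function names below are illustrative assumptions; the point is simply that a delta-rule-style average lags behind a rising series and under-predicts the next weight, whereas trend extrapolation predicts a weight that has never been experienced.

```python
import numpy as np

def weighted_average_prediction(history, alpha=0.5):
    """Delta-rule / exponentially weighted average of past weights: it always
    lags behind a rising series, so it under-predicts the next weight."""
    estimate = history[0]
    for w in history[1:]:
        estimate += alpha * (w - estimate)
    return estimate

def trend_extrapolation(history):
    """Fit a line to the experienced series and extrapolate one step ahead,
    predicting a weight that has never been lifted."""
    n = np.arange(len(history))
    slope, intercept = np.polyfit(n, history, 1)
    return slope * len(history) + intercept

weights = np.arange(1.0, 9.0)   # illustrative series of increasing weights
history = weights[:-1]          # weights experienced so far

print("next weight in the series:", weights[-1])
print("weighted-average prediction:", weighted_average_prediction(history))  # lags behind
print("trend-extrapolation prediction:", trend_extrapolation(history))       # ~ the unseen weight
```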