Neuroplasticity may play a critical role in developing robust, naturally controlled neuroprostheses. This learning, however, is sensitive to system changes such as the neural activity used for control. The ultimate utility of neuroplasticity in real-world neuroprostheses is thus unclear. Adaptive decoding methods hold promise for improving neuroprosthetic performance in nonstationary systems. Here, we explore the use of decoder adaptation to shape neuroplasticity in two scenarios relevant for real-world neuroprostheses: nonstationary recordings of neural activity and changes in control context. Nonhuman primates learned to control a cursor to perform a reaching task using semistationary neural activity in two contexts: with and without simultaneous arm movements. Decoder adaptation was used to improve initial performance and compensate for changes in neural recordings. We show that beneficial neuroplasticity can occur alongside decoder adaptation, yielding performance improvements, skill retention, and resistance to interference from native motor networks. These results highlight the utility of neuroplasticity for real-world neuroprostheses.
Closed-loop decoder adaptation (CLDA) shows great promise to improve closed-loop brain-machine interface (BMI) performance. Developing adaptation algorithms capable of rapidly improving performance, independent of initial performance, may be crucial for clinical applications where patients have limited movement and sensory abilities due to motor deficits. Given the subject-decoder interactions inherent in closed-loop BMIs, the decoder adaptation time-scale may be of particular importance when initial performance is limited. Here, we present SmoothBatch, a CLDA algorithm which updates decoder parameters on a 1-2 min time-scale using an exponentially weighted sliding average. The algorithm was experimentally tested with one nonhuman primate performing a center-out reaching BMI task. SmoothBatch was seeded four ways with varying offline decoding power: 1) visual observation of a cursor (n = 20), 2) ipsilateral arm movements (n = 8), 3) baseline neural activity (n = 17), and 4) arbitrary weights (n = 11). SmoothBatch rapidly improved performance regardless of seeding, with performance improvements from 0.018 ± 0.133 successes/min to > 8 successes/min within 13.1 ± 5.5 min (n = 56). After decoder adaptation ceased, the subject maintained high performance. Moreover, performance improvements were paralleled by SmoothBatch convergence, suggesting that CLDA involves a co-adaptation process between the subject and the decoder.
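The exponentially weighted sliding average described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the weighting constant rho, and the toy parameter values are all assumptions for demonstration.

```python
import numpy as np

def smoothbatch_update(theta_old, theta_batch, rho=0.5):
    """One SmoothBatch-style step: blend the previous decoder parameters
    with a fresh batch estimate fit to recent closed-loop data.
    rho near 1 -> slow, smooth adaptation; rho near 0 -> rapid adaptation."""
    return rho * theta_old + (1.0 - rho) * theta_batch

# Illustration: repeated updates (one every 1-2 min in the experiment)
# pull an arbitrary seed decoder toward the batch estimates.
theta = np.array([0.0, 0.0])        # arbitrary seed weights
theta_star = np.array([1.0, -2.0])  # hypothetical stationary batch estimate
for _ in range(20):
    theta = smoothbatch_update(theta, theta_star, rho=0.5)
```

With a stationary batch estimate, the update is a geometric contraction, so the decoder parameters converge regardless of the seed, mirroring the seeding-independent improvements reported above.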
Brain-machine interfaces (BMIs) create novel sensorimotor pathways for action. Much as the sensorimotor apparatus shapes natural motor control, the characteristics of the BMI pathway may also influence neuroprosthetic control. Here, we explore the influence of control and feedback rates, where the control rate is how often motor commands are sent from the brain to the prosthetic and the feedback rate is how often visual feedback of the prosthetic is provided to the subject. We developed a new BMI that allows arbitrarily fast control and feedback rates and used it to dissociate the effects of each rate in two monkeys. Increasing the control rate significantly improved control even when the feedback rate was unchanged. Increasing the feedback rate further facilitated control. We also show that our high-rate BMI significantly outperformed state-of-the-art methods due to its higher control and feedback rates, combined with a different point-process mathematical encoding model. Our BMI paradigm can dissect the contribution of different elements in the sensorimotor pathway, providing a unique tool for studying neuroprosthetic control mechanisms.
Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data of two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis using measures such as mean-squared error and KL divergence as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms.
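The two convergence measures named above can be implemented generically. The sketch below assumes decoder parameters collected in a vector and a Gaussian (e.g. Kalman-filter) observation model; it is a standard rendering of mean-squared error and the closed-form KL divergence between Gaussians, not the authors' analysis code.

```python
import numpy as np

def mse(theta_hat, theta_opt):
    """Mean-squared error between current and optimal decoder parameters."""
    return float(np.mean((theta_hat - theta_opt) ** 2))

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL divergence KL( N(mu0, cov0) || N(mu1, cov1) ), the closed form for
    comparing two Gaussian decoder models (e.g. Kalman observation models)."""
    k = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return float(0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))))
```

Tracking these quantities across simulated CLDA updates, before any experiment, shows whether a prototype algorithm's parameters approach the optimal values or drift.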
During the process of skill learning, synaptic connections in our brains are modified to form motor memories of learned sensorimotor acts. The more plastic the adult brain is, the easier it is to learn new skills or adapt to neurological injury. However, if the brain is too plastic and the pattern of synaptic connectivity is constantly changing, new memories will overwrite old memories, and learning becomes unstable. This trade-off is known as the stability-plasticity dilemma. Here a theory of sensorimotor learning and memory is developed whereby synaptic strengths are perpetually fluctuating without causing instability in motor memory recall, as long as the underlying neural networks are sufficiently noisy and massively redundant. The theory implies two distinct stages of learning, preasymptotic and postasymptotic, because once the error drops to a level comparable to that of the noise-induced error, further error reduction requires altered network dynamics. A key behavioral prediction derived from this analysis is tested in a visuomotor adaptation experiment, and the resultant learning curves are modeled with a nonstationary neural network. Next, the theory is used to model two-photon microscopy data that show, in animals, high rates of dendritic spine turnover, even in the absence of overt behavioral learning. Finally, the theory predicts enhanced task selectivity in the responses of individual motor cortical neurons as the level of task expertise increases. From these considerations, a unique interpretation of sensorimotor memory is proposed: memories are defined not by fixed patterns of synaptic weights but, rather, by nonstationary synaptic patterns that fluctuate coherently.

Sensorimotor skill learning, like other types of learning, occurs through the general mechanism of experience-dependent synaptic plasticity (1, 2).
As we learn a new skill (such as a tennis stroke) through extensive practice, synapses in our brain are modified to form a lasting motor memory of that skill. However, if synapses are overly pliable and in a state of perpetual flux, memories may not stabilize properly, as new learning can overwrite previous learning. Thus, for any distributed learning system, there is inherent tension between the competing requirements of stability and plasticity (3): synapses must be sufficiently plastic to support the formation of new memories, while changing in a manner that preserves the traces of old memories. The specific learning mechanisms by which these contradictory constraints are simultaneously fulfilled are one of neuroscience's great mysteries.

The inescapability of the stability-plasticity dilemma, as faced by any distributed learning system, is shown in the cartoon neural network in Fig. 1A. Suppose that the input pattern [0.6, 0.4] must be transformed into the activation pattern [0.5, 0.7] at the output layer. Given the initial connectivity of the network, the input transforms to the incorrect output [0.8, 0.2]. Through practice and a learning mechanism, the weig...
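The cartoon network's error can be driven to zero with a standard delta-rule update on a linear layer. The initial weight matrix below is one illustrative choice that reproduces the incorrect output [0.8, 0.2]; the learning rate and update rule are textbook assumptions, not necessarily those of Fig. 1A.

```python
import numpy as np

x = np.array([0.6, 0.4])        # input pattern from the cartoon example
t = np.array([0.5, 0.7])        # desired output pattern
W = np.array([[1.0, 0.5],       # illustrative initial weights: W @ x = [0.8, 0.2]
              [0.1, 0.35]])

lr = 0.5
for _ in range(200):
    y = W @ x                       # forward pass through the linear network
    W += lr * np.outer(t - y, x)    # delta-rule weight update

# W @ x now approximates the target [0.5, 0.7]
```

Because the output error shrinks by a constant factor (1 - lr * x.x) each step, practice drives the network to the target, but note that the corrected weights are not unique: many weight patterns produce the same output, which is the redundancy the theory above exploits.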