One of the key elements in the design of neuromotor brain-machine interfaces (BMIs) is the neural decoder. In a biomimetic approach, the decoder is typically trained from concurrent recordings of neural activity and kinematic or motor imagery data. The unavailability of the latter poses a practical problem for patients who have lost motor function. An alternative is a biofeedback approach, in which subjects are encouraged to 'learn' an arbitrary mapping between neural activity and the external end effector. In this work, we propose an unsupervised decoder initialization scheme for the biofeedback approach that removes the need for synchronized kinematic or motor imagery data during decoder training. The approach is entirely unsupervised in that the recorded neural activity is used directly as training data for a decoder designed to produce 'desirable' features in the decoded control signal. The decoder is trained on 'spontaneous' neural data recorded while the BMI subject is not engaged in any behavioral task, and we demonstrate its ability to generalize to neural data collected when the subject is in a different behavioral state.
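As a concrete illustration of what such an unsupervised initialization could look like, the sketch below derives a linear decoder purely from binned spontaneous firing rates, using temporal smoothing and PCA as one possible notion of 'desirable' (slowly varying, high-variance) control features. All function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def init_decoder_unsupervised(spont_rates, n_dims=2, smooth_win=5):
    """Sketch: derive a linear decoder from spontaneous activity alone.

    spont_rates : array, shape (n_bins, n_units)
        Binned firing rates recorded while the subject is at rest
        (no kinematic or motor imagery labels required).
    Returns a weight matrix mapping z-scored rates to n_dims control signals.
    """
    # Z-score each unit so high-rate units do not dominate the mapping.
    mu = spont_rates.mean(axis=0)
    sd = spont_rates.std(axis=0) + 1e-9
    z = (spont_rates - mu) / sd

    # Smooth in time so the decoded output favors low-frequency,
    # slowly varying control signals (one choice of 'desirable' feature).
    kernel = np.ones(smooth_win) / smooth_win
    z_smooth = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, z)

    # PCA: the leading components of spontaneous covariation define
    # the decoder axes (an illustrative, not definitive, choice).
    cov = np.cov(z_smooth, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_dims]
    W = evecs[:, order]                      # (n_units, n_dims)
    return W, mu, sd

def decode(rates, W, mu, sd):
    """Apply the initialized decoder to new binned firing rates."""
    return ((rates - mu) / sd) @ W
```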
In the study of population coding in neurobiological systems, tracking unit identity may be critical for assessing possible changes in the coding properties of neuronal constituents over prolonged periods of time. Ensuring unit stability is even more critical for reliable neural decoding of motor variables in intracortically controlled brain-machine interfaces (BMIs). Variability in intrinsic spike patterns, tuning characteristics, and single-unit identity over chronic use is a major challenge to maintaining this stability, requiring frequent daily calibration of neural decoders by an experienced human operator during BMI sessions. Here, we report a unit-stability tracking algorithm that efficiently and autonomously identifies putative single units that are stable across many sessions, using a relatively short recording interval at the start of each session. The algorithm first builds a database of features extracted from units' average spike waveforms and firing patterns across many days of recording. It then uses these features to decide whether spikes recorded on the same channel on different days belong to the same unit. We assessed the overall performance of the algorithm for different choices of features and classifiers trained on human expert judgment, and quantified it in terms of accuracy and execution time. Overall, we found a trade-off between accuracy and execution time with increasing data volumes from chronically implanted rhesus macaques, with an average processing time of 12 s per channel at ~90% classification accuracy. Furthermore, 77% of the resulting putative single units matched those tracked by human experts. These results demonstrate that, over a span of a few months of recordings, automated unit tracking can be performed with high accuracy and used to streamline the calibration phase of BMI sessions. Our findings may be useful for studying population coding during learning, for improving the reliability of BMI systems, and for accelerating their deployment in clinical applications.
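The sketch below illustrates the general idea of cross-day unit matching from waveform and firing-pattern features. The specific features, weights, and threshold here are placeholder assumptions; in the reported algorithm the classifier is trained on human-expert labels.

```python
import numpy as np

def pairwise_unit_features(wf_a, wf_b, isi_a, isi_b):
    """Sketch of a cross-day comparison feature vector for one channel.

    wf_*  : average spike waveform of a unit on day A / day B (1-D arrays).
    isi_* : inter-spike-interval histogram (same binning) on each day.
    The feature set is illustrative, not the paper's exact choice.
    """
    # Waveform shape similarity (amplitude-invariant).
    wf_corr = np.corrcoef(wf_a, wf_b)[0, 1]
    # Peak-to-peak amplitude ratio, as a crude gain-change measure.
    amp_ratio = np.ptp(wf_a) / (np.ptp(wf_b) + 1e-12)
    # Firing-pattern similarity via ISI histogram intersection.
    pa = isi_a / (isi_a.sum() + 1e-12)
    pb = isi_b / (isi_b.sum() + 1e-12)
    isi_overlap = np.minimum(pa, pb).sum()
    return np.array([wf_corr, np.log(amp_ratio + 1e-12), isi_overlap])

def same_unit(features, weights=np.array([4.0, -1.5, 3.0]), bias=-3.0):
    """Toy linear classifier; in practice the weights would be learned
    from human-expert judgments, as described in the abstract."""
    score = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-score)) > 0.5
```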
The development of coordinated reach-to-grasp movement has been well studied in infants and children. However, the role of motor cortex during this development is unclear because it is difficult to study in humans. We took the approach of using a brain-machine interface (BMI) paradigm in rhesus macaques with prior therapeutic amputations to examine the emergence of novel, coordinated reach-to-grasp movements. Previous research has shown that after amputation, the cortical area previously involved in control of the lost limb undergoes reorganization, yet prior BMI work has largely relied on finding neurons that already encode specific movement-related information. In this study, we taught macaques to cortically control a robotic arm and hand through operant conditioning, using neurons that were not explicitly reach or grasp related. Over the course of training, stereotypical patterns emerged and stabilized in the cross-covariance between the reach and grasp velocity profiles, between pairs of neurons involved in controlling reach and grasp, and to a comparable but lesser extent between other stable neurons in the network. In fact, we found evidence of this structured coordination in pairs composed of all combinations of neurons decoding reach or grasp and other stable neurons in the network. The degree of, and participation in, coordination was highly correlated across all pair types. Our approach provides a unique model for studying the development of novel, coordinated reach-to-grasp movement at the behavioral and cortical levels. NEW & NOTEWORTHY Given that motor cortex undergoes reorganization after amputation, our work focuses on training nonhuman primates with chronic amputations, through operant conditioning, to use neurons that are not reach or grasp related to control a robotic arm in a reach-to-grasp task, mimicking early development. We studied the development of a novel, coordinated behavior at the behavioral and cortical levels, and the neural plasticity in M1 associated with learning to use a brain-machine interface.
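The cross-covariance analysis described above can be illustrated with a short sketch that computes the normalized cross-covariance between two signals, such as reach and grasp velocity profiles or two units' binned firing rates. The lag range and normalization are assumptions for illustration, not the study's exact parameters.

```python
import numpy as np

def cross_covariance(x, y, max_lag=50):
    """Normalized cross-covariance between two 1-D signals across lags.

    Positive lag k means y is shifted k bins later relative to x.
    """
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([
        np.dot(x[max(0, -k):n - max(0, k)],
               y[max(0, k):n - max(0, -k)]) / n
        for k in lags
    ])
    return lags, cc
```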
Operant conditioning with biofeedback has been shown to be an effective method for modifying neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body, as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which a monkey with a chronic amputation is trained to control a multiple-degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees of freedom. We present a number of behavioral metrics for assessing improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputations is arguably the most clinically relevant model of human amputation and could have direct implications for developing a neural prosthesis for humans with missing upper limbs.
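As a rough sketch of the synergy-based control architecture described above, the example below maps a two-dimensional decoded control signal (reach, grasp) onto joint commands through a fixed synergy matrix. The matrix entries and joint labels are placeholders, not the synergies used in the study.

```python
import numpy as np

# Illustrative synergy matrix: rows are joint degrees of freedom of the
# robotic arm/hand, columns are the two controlled dimensions.
# These numbers are placeholders for the sketch only.
SYNERGY = np.array([
    [1.0, 0.0],   # shoulder flexion  <- reach dimension
    [0.8, 0.0],   # elbow extension   <- reach dimension
    [0.0, 1.0],   # thumb flexion     <- grasp dimension
    [0.0, 0.9],   # index flexion     <- grasp dimension
    [0.0, 0.7],   # middle flexion    <- grasp dimension
])

def synergy_to_joints(control, joint_min=None, joint_max=None):
    """Map a 2-D decoded control vector (reach, grasp) to joint commands."""
    joints = SYNERGY @ np.asarray(control)
    if joint_min is not None and joint_max is not None:
        joints = np.clip(joints, joint_min, joint_max)
    return joints

# Example: mid-reach posture with a fully closed grasp.
print(synergy_to_joints([0.5, 1.0]))
```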
Studies on neural plasticity associated with brain-machine interface (BMI) exposure have primarily documented changes in single-neuron activity, and largely in intact subjects. Here, we demonstrate significant changes in ensemble-level functional connectivity among primary motor cortex (M1) neurons of chronically amputated monkeys trained to control a multiple-degree-of-freedom robot arm. A multi-electrode array was implanted in M1 contralateral or ipsilateral to the amputation in three animals. Two clusters of stably recorded neurons were arbitrarily assigned to control reach and grasp movements, respectively. With exposure, network density increased in a nearly monotonic fashion in the contralateral monkeys, whereas the ipsilateral monkey pruned the existing network before re-forming a denser one. Excitatory connections were denser among neurons within a cluster, whereas inhibitory connections were denser among neurons across the two clusters. These results indicate that cortical network connectivity can be modified with BMI learning, even among neurons that have been chronically de-efferented and de-afferented due to amputation.
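A minimal sketch of how network density might be quantified from pairwise functional connectivity is shown below, assuming binned spike counts and a simple correlation threshold. The connectivity estimator and threshold are illustrative assumptions, not the method used in the study.

```python
import numpy as np

def network_density(spike_counts, thresh=0.2):
    """Sketch: estimate functional connectivity and network density.

    spike_counts : array, shape (n_bins, n_units) of binned spike counts.
    An edge is declared when the pairwise correlation magnitude exceeds
    `thresh`; the sign separates putative excitatory from inhibitory
    connections. Both choices are simplifications for illustration.
    """
    n_units = spike_counts.shape[1]
    corr = np.corrcoef(spike_counts, rowvar=False)
    np.fill_diagonal(corr, 0.0)                     # exclude self-connections

    edges = np.abs(corr) > thresh
    n_edges = edges.sum() // 2                      # undirected, count once
    density = n_edges / (n_units * (n_units - 1) / 2)

    excitatory = (corr > thresh).sum() // 2
    inhibitory = (corr < -thresh).sum() // 2
    return density, excitatory, inhibitory
```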