Loss of hand use is considered by many spinal cord injury survivors to be the most devastating consequence of their injury. Functional electrical stimulation (FES) of forearm and hand muscles has been used to provide basic, voluntary hand grasp to hundreds of human patients. Current approaches typically grade pre-programmed patterns of muscle activation using simple control signals, such as those derived from residual movement or muscle activity. However, the use of such fixed stimulation patterns limits hand function to the few tasks programmed into the controller. In contrast, we are developing a system that uses neural signals recorded from a multi-electrode array implanted in the motor cortex; this system has the potential to provide independent control of multiple muscles over a broad range of functional tasks. Two monkeys were able to use this cortically controlled FES system to control the contraction of four forearm muscles despite temporary limb paralysis. The amount of wrist force the monkeys were able to produce in a one-dimensional force tracking task was significantly greater than during paralysis without stimulation. Furthermore, the monkeys were able to control the magnitude and time course of the force with sufficient accuracy to track visually displayed force targets at speeds reduced by only one-third to one-half of normal. Although these results were achieved by controlling only four muscles, there is no fundamental reason why the same methods could not be scaled up to control a larger number of muscles. We believe these results provide an important proof of concept that brain-controlled FES prostheses could ultimately be of great benefit to paralyzed patients with injuries in the mid-cervical spinal cord.
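The core control step in such a system is mapping a decoded per-muscle activation command onto a graded stimulation output. A minimal sketch of that mapping is shown below; the pulse-width range, recruitment threshold, and four-channel setup are illustrative assumptions, not the authors' actual stimulation parameters.

```python
import numpy as np

def activation_to_pulse_width(activation, pw_min=50.0, pw_max=200.0, threshold=0.1):
    """Grade FES pulse width (in microseconds) linearly above a recruitment
    threshold; commands below threshold deliver no stimulation.

    All parameter values here are hypothetical placeholders.
    """
    a = np.clip(activation, 0.0, 1.0)
    return np.where(
        a < threshold,
        0.0,
        pw_min + (a - threshold) / (1.0 - threshold) * (pw_max - pw_min),
    )

# One decoded command per forearm muscle, each channel driven independently.
commands = np.array([0.05, 0.3, 0.6, 1.0])
print(activation_to_pulse_width(commands))
```

Independent channels like this are what distinguish the approach from fixed, pre-programmed grasp patterns: each muscle's stimulation is free to follow its own cortically decoded command.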
Movement representation by the motor cortex (M1) has been a theoretical interest for many years, but in the past several years it has become a more practical question, with the advent of the brain-machine interface. An increasing number of groups have demonstrated the ability to predict a variety of kinematic signals on the basis of M1 recordings and to use these predictions to control the movement of a cursor or robotic limb. We, in contrast, have undertaken the prediction of myoelectric (EMG) signals recorded from various muscles of the arm and hand during button pressing and prehension movements. We have shown that these signals can be predicted with accuracy that is similar to that of kinematic signals, despite their stochastic nature and greater bandwidth. The predictions were made using a subset of 12 or 16 neural signals, selected in order of each signal's unique, output-related information content. The accuracy of the resultant predictions remained stable through a typical experimental session, and remained above 80% of its initial level for most muscles even across periods as long as two weeks. We are exploring the use of these predictions as control signals for neuromuscular electrical stimulation in quadriplegic patients.
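EMG prediction of this kind is commonly framed as a linear mapping from a window of recent firing rates to the current muscle signal. The sketch below illustrates that idea with ridge regression on simulated data; the data shapes, lag count, and ridge penalty are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_units, n_lags = 500, 12, 5

# Simulated binned firing rates for 12 units, and an EMG-like target that
# depends (noisily) on those rates -- a stand-in for real recordings.
rates = rng.poisson(4.0, size=(n_samples, n_units)).astype(float)
true_w = np.abs(rng.normal(size=n_units))
emg = rates @ true_w + rng.normal(scale=2.0, size=n_samples)

# Design matrix of lagged rates: current bin plus the previous n_lags-1 bins.
# The first n_lags rows are dropped to discard np.roll's wrap-around.
X = np.hstack([np.roll(rates, lag, axis=0) for lag in range(n_lags)])[n_lags:]
y = emg[n_lags:]

# Closed-form ridge regression from lagged rates to EMG.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ w

r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"prediction R^2: {r2:.2f}")
```

In practice the abstract's neuron-selection step would rank candidate units by their unique, output-related information and keep only the best 12-16; here all simulated units are simply used.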
Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with minimal burden on the user, provide stable control for long periods of time, and remain responsive to fluctuations in the decoder's neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized with random initial conditions, and it quickly learned to control the robot from brain states using only binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled.
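The defining feature of this architecture is that the decoder learns from a binary "good/bad" signal alone, with no kinematic calibration data. The toy sketch below shows one way an actor-critic loop with binary feedback can learn a two-action task from simulated neural states; the logistic actor, simulated state patterns, and learning rates are illustrative assumptions, not the RLBMI's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 20

# Two latent "neural patterns", one per intended reach target.
patterns = rng.normal(size=(2, n_features))

w_actor = np.zeros(n_features)   # actor: logistic policy over 2 robot actions
w_critic = np.zeros(n_features)  # critic: predicts expected reward per state
alpha_a, alpha_c = 0.05, 0.01

correct = []
for trial in range(2000):
    target = rng.integers(2)
    state = patterns[target] + rng.normal(scale=0.5, size=n_features)

    p_act1 = 1.0 / (1.0 + np.exp(-state @ w_actor))
    action = int(rng.random() < p_act1)

    reward = 1.0 if action == target else -1.0   # binary evaluative feedback
    td_error = reward - state @ w_critic         # critic's "surprise"

    # Policy-gradient update for the actor, scaled by the critic's error;
    # standard LMS-style update for the critic itself.
    w_actor += alpha_a * td_error * (action - p_act1) * state
    w_critic += alpha_c * td_error * state

    correct.append(action == target)

late_accuracy = float(np.mean(correct[-500:]))
print(f"accuracy over last 500 trials: {late_accuracy:.2f}")
```

Because the update rule depends only on the evaluative signal and the observed state, the same loop keeps adapting if the input space changes, which is the property exploited in the halving/doubling perturbation tests.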
We describe a closed-loop brain-computer interface that re-ranks an image database by iterating between user-generated 'interest' scores and visual-similarity measures generated by computer vision. The interest scores are based on decoding the electroencephalographic (EEG) correlates of target detection, attentional shifts, and self-monitoring processes, which result from the user paying attention to target images interspersed in rapid serial visual presentation (RSVP) sequences. The highest-scored images are passed to a semi-supervised computer vision system that reorganizes the image database accordingly, using a graph-based representation that captures visual similarity between images. The system can either query the user for more information, by adaptively resampling the database to create additional RSVP sequences, or it can converge to a 'done' state. The done state includes a final ranking of the image database and also a 'guess' of the user's chosen category of interest. We find that the closed-loop system's re-rankings can substantially expedite database searches for target image categories chosen by the subjects. Furthermore, better reorganizations are achieved than by relying on EEG interest rankings alone, or by simply running the system in an open-loop format without adaptive resampling.
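The graph-based reorganization step can be pictured as score diffusion over a visual-similarity graph seeded by the EEG interest scores, in the manner of manifold ranking. The sketch below illustrates this on toy features; the feature clusters, Gaussian kernel width, and damping factor are illustrative assumptions, not the paper's actual computer-vision pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_images, dim = 60, 8

# Two visual clusters of toy feature vectors; images 0-29 are the "targets".
features = np.vstack([rng.normal(0.0, 1.0, (30, dim)),
                      rng.normal(4.0, 1.0, (30, dim))])

# Similarity graph from a Gaussian kernel on pairwise distances,
# row-normalized into a transition matrix.
d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * dim))
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)

# Sparse EEG interest scores: only a few RSVP-flagged targets act as seeds.
seed = np.zeros(n_images)
seed[[0, 5, 11]] = 1.0

# Diffuse the seed scores through the graph until they stabilize.
alpha, scores = 0.85, seed.copy()
for _ in range(100):
    scores = alpha * P @ scores + (1 - alpha) * seed

ranking = np.argsort(-scores)
top30_hits = int(np.sum(ranking[:30] < 30))
print(f"target images ranked in the top 30: {top30_hits}/30")
```

The closed loop would then resample low-confidence regions of this ranking for new RSVP sequences, tightening the scores with each iteration; the sketch shows only a single diffusion pass.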