Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with minimal burden on the user, can provide stable control for long periods of time, and can respond to fluctuations in the decoder's neural input space (e.g., neurons appearing in or being lost from the electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized with random initial conditions and quickly learned to control the robot from brain states using only binary evaluative feedback indicating whether previously chosen robot actions were good or bad. The RLBMI maintained control over the system throughout sessions spanning multiple weeks. Furthermore, it quickly adapted and maintained control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled.
By estimating evaluative feedback directly from the user, this reinforcement learning control architecture may provide an efficient method for the autonomous adaptation of neuroprosthetic systems. It may enable the user to teach the controller the desired behavior using only a simple feedback signal.
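For concreteness, the following is a minimal sketch of the kind of actor-critic update such an RLBMI could perform: a softmax policy selects one of two robot actions from a neural feature vector, and both actor and critic weights are adjusted from a binary good/bad evaluative signal. All names, dimensions, and learning rates here are illustrative assumptions, not values reported in the study.

```python
import numpy as np

# Minimal actor-critic sketch with binary evaluative feedback.
# State: a neural feature vector (e.g., binned firing rates).
# Actions: two discrete robot movements (e.g., left/right target).
# Hyperparameters and sizes are assumptions for illustration only.

rng = np.random.default_rng(0)
n_features, n_actions = 32, 2
actor_w = rng.normal(scale=0.01, size=(n_actions, n_features))   # policy weights
critic_w = rng.normal(scale=0.01, size=n_features)                # value weights
alpha_actor, alpha_critic = 0.05, 0.1

def select_action(state):
    """Softmax policy over robot actions given the neural state."""
    logits = actor_w @ state
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(n_actions, p=probs), probs

def update(state, action, probs, reward):
    """Update actor and critic from a binary (+1 / -1) evaluative signal."""
    global actor_w, critic_w
    value = critic_w @ state
    td_error = reward - value                  # single-step task: no bootstrap term
    critic_w += alpha_critic * td_error * state
    grad = -probs[:, None] * state[None, :]    # d log pi / d w for a softmax policy
    grad[action] += state
    actor_w += alpha_actor * td_error * grad

# Example trial: a random "neural" state, an action, and a good/bad signal.
state = rng.normal(size=n_features)
action, probs = select_action(state)
reward = 1.0 if action == 0 else -1.0          # stand-in for the evaluative feedback
update(state, action, probs, reward)
```

Because the update depends only on the binary evaluative signal, the same loop can keep adapting when the neural input space changes, which is the property the abstract emphasizes.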
Background: The common marmoset (Callithrix jacchus) has been proposed as a suitable bridge between rodents and larger primates. Marmosets have been used in several types of research, including auditory, vocal, visual, pharmacological, and genetic studies; however, they have been used far less often for behavioral studies.
New Method: Here we present data from training 12 adult marmosets for behavioral neuroscience studies. We discuss husbandry, food preferences, handling, acclimation to the laboratory environment, and neurosurgical techniques. We also present a custom-built "scoop" and a monkey chair suitable for training these animals.
Results: The animals were trained on three tasks: a four-target center-out reaching task, a reaching task that involved following robot actions, and a touch-screen task. All animals learned the center-out reaching task within 1–2 weeks, whereas the task of following robot actions took several months of behavioral training, during which the monkeys learned to associate robot actions with food rewards.
Comparison with Existing Methods: We propose the marmoset as a novel model for behavioral neuroscience research and an alternative to larger primate models, owing to its ease of handling, rapid reproduction, available neuroanatomy, sensorimotor system similar to that of larger primates and humans, and lissencephalic brain, which makes implantation of microelectrode arrays at various cortical locations relatively easier than in larger primates.
Conclusion: All animals learned the behavioral tasks well, and we present the marmoset as an alternative model for simple behavioral neuroscience tasks.
Loss of hand function after cervical spinal cord injury severely impairs functional independence. We describe a method for restoring volitional control of hand grasp in one 21-year-old male subject with complete cervical quadriplegia (C5, American Spinal Injury Association Impairment Scale A) using a portable, fully implanted brain-computer interface within the home environment. The brain-computer interface consists of subdural surface electrodes placed over the dominant-hand motor cortex and connected to a transmitter implanted subcutaneously below the clavicle, which allows continuous reading of the electrocorticographic activity. Movement intent was used to trigger functional electrical stimulation of the dominant hand during an initial 29-week laboratory study and subsequently to drive a mechanical hand orthosis during in-home use. Movement-intent information could be decoded consistently throughout the 29-week in-laboratory study, with a mean accuracy of 89.0% (range 78–93.3%). Improvements were observed in both the speed and accuracy of various upper-extremity tasks, including lifting small objects and transferring objects to specific targets. At-home decoding accuracy reached 91.3% (range 80–98.95%) during open-loop trials and 88.3% (range 77.6–95.5%) during closed-loop trials. Importantly, the temporal stability of the functional outcomes and decoder metrics was not explored in this study. A fully implanted brain-computer interface can be safely used to reliably decode movement intent from the motor cortex, allowing accurate volitional control of hand grasp.
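As a rough illustration of the kind of pipeline described above (not the study's actual decoder), the sketch below thresholds high-gamma band power from a single ECoG channel to produce a binary movement-intent flag that could trigger a grasp command. The sampling rate, frequency band, window length, threshold, and function names are all assumptions made for this example.

```python
import numpy as np

# Illustrative movement-intent detector: band-power features from one
# ECoG channel over motor cortex feed a simple threshold rule whose
# output would trigger stimulation or the orthosis. Every parameter
# here is an assumption, not a value from the study.

FS = 500                      # assumed sampling rate (Hz)
WINDOW = FS // 2              # 0.5 s analysis window

def band_power(window, low, high, fs=FS):
    """Mean spectral power of one channel within a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def detect_intent(window, baseline=1.0, threshold=2.0):
    """Flag movement intent when high-gamma power exceeds a baseline multiple."""
    high_gamma = band_power(window, 70, 170)
    return high_gamma / baseline > threshold

def control_loop(ecog_channel):
    """Stream one channel window-by-window and emit grasp/relax commands."""
    for start in range(0, ecog_channel.size - WINDOW, WINDOW):
        window = ecog_channel[start:start + WINDOW]
        if detect_intent(window):
            print("trigger grasp")    # placeholder for FES / orthosis command
        else:
            print("relax")
```

In a closed-loop setting, the baseline and threshold would typically be calibrated per session; that calibration step is omitted from this sketch.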