Despite the rapid progress and interest in brain-machine interfaces that restore motor function, the performance of prosthetic fingers and limbs has yet to mimic native function. The algorithm that converts brain signals into a control signal for the prosthetic device is one of the limitations to achieving rapid and realistic finger movements. To achieve more realistic finger movements, we developed a shallow feed-forward neural network to decode real-time two-degree-of-freedom finger movements in two adult male rhesus macaques. Using a two-step training method, we introduce a recalibrated feedback intention-trained (ReFIT) neural network to further improve performance. In 7 days of testing across two animals, the neural network decoders achieved a 36% increase in throughput over the ReFIT Kalman filter, the current standard, with higher finger velocities and more natural-appearing finger movements. The neural network decoders introduced herein demonstrate real-time decoding of continuous movements at a level superior to the current state of the art and could provide a starting point for using neural networks in the development of more naturalistic brain-controlled prostheses.
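To make the decoding pipeline concrete, the sketch below shows a minimal shallow feed-forward decoder mapping binned spike counts to two-degree-of-freedom finger velocities. This is an illustration under stated assumptions, not the authors' implementation: the channel count, history length, bin width, layer sizes, and training loop are all placeholders, and the ReFIT-style second pass (relabeling decoded velocities toward the target before refitting) is only indicated in the comments.

```python
# Hypothetical sketch of a shallow feed-forward decoder for 2-DoF finger
# velocities from binned spike counts. Layer sizes, bin width, and history
# length are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

N_CHANNELS = 96   # assumed intracortical array channel count
N_HISTORY = 5     # assumed number of 50 ms history bins per channel
N_DOF = 2         # index group and middle-ring-small group velocities

class ShallowDecoder(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CHANNELS * N_HISTORY, hidden),
            nn.ReLU(),
            nn.Linear(hidden, N_DOF),  # outputs finger-group velocities
        )

    def forward(self, x):
        return self.net(x)

def train(model, features, velocities, epochs=50, lr=1e-3):
    """Step 1: supervised fit to observed kinematics.
    Step 2 (ReFIT-style, assumed): after an online control block, rotate
    decoded velocity vectors toward the target, then rerun this same loop
    on the relabeled data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), velocities)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    model = ShallowDecoder()
    X = torch.randn(1000, N_CHANNELS * N_HISTORY)  # placeholder spike features
    Y = torch.randn(1000, N_DOF)                   # placeholder velocities
    train(model, X, Y)
```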
Modern brain-machine interfaces can return function to people with paralysis, but current hand neural prostheses are unable to reproduce control of individuated finger movements. Here, for the first time, we present a real-time, high-speed, linear brain-machine interface in nonhuman primates that utilizes intracortical neural signals to bridge this gap. We created a novel task that systematically individuates two finger groups, the index finger and the middle-ring-small fingers combined, presenting separate targets for each group. During online brain control, the ReFIT Kalman filter demonstrated the capability of individuating movements of each finger group with high performance, enabling a nonhuman primate to acquire two targets simultaneously at 1.95 targets per second, resulting in an average information throughput of 2.1 bits per second. To understand this result, we performed single-unit tuning analyses. Cortical neurons were active for movements of an individual finger group, combined movements of both finger groups, or both. Linear combinations of neural activity representing individual finger group movements predicted the neural activity during combined finger group movements with high accuracy, and vice versa. Hence, a linear model was able to explain how cortical neurons encode information about multiple dimensions of movement simultaneously. Additionally, training ridge regression decoders on independent component movements was sufficient to predict untrained, higher-complexity movements. Our results suggest that linear decoders for brain-machine interfaces may be sufficient to execute high-dimensional tasks with the performance levels required for naturalistic neural prostheses.
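The linearity claim above lends itself to a small worked sketch: fit a ridge regression decoder only on trials where one finger group moves at a time, then test how well it generalizes to combined-movement trials. Everything below is an assumption for illustration; synthetic data with a linear ground truth stands in for real firing rates, and the unit counts and noise level are arbitrary.

```python
# Minimal sketch of the linearity test described above, under assumed data
# shapes: fit a ridge decoder on single-finger-group trials, then score it
# on held-out combined-movement trials. Synthetic data only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_units, n_samples = 50, 2000

# Assumed linear ground truth: each unit's rate is a linear mix of the two
# finger-group velocities plus noise.
mixing = rng.normal(size=(n_units, 2))
vel_single = np.zeros((n_samples, 2))
vel_single[:n_samples // 2, 0] = rng.normal(size=n_samples // 2)  # index only
vel_single[n_samples // 2:, 1] = rng.normal(size=n_samples // 2)  # MRS only
rates_single = vel_single @ mixing.T + 0.1 * rng.normal(size=(n_samples, n_units))

decoder = Ridge(alpha=1.0).fit(rates_single, vel_single)

# Evaluate on simultaneous movements of both finger groups.
vel_combined = rng.normal(size=(500, 2))
rates_combined = vel_combined @ mixing.T + 0.1 * rng.normal(size=(500, n_units))
print("R^2 on combined movements:", decoder.score(rates_combined, vel_combined))
```

If the encoding really is linear, as the abstract argues, the decoder trained on component movements scores near 1.0 on combined movements; a strongly nonlinear interaction between finger groups would drive this score down.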
Vowels make a strong contribution to speech perception under natural conditions. Vowels are encoded in the auditory nerve primarily through neural synchrony to temporal fine structure and to envelope fluctuations rather than through average discharge rate. Neural synchrony is thought to contribute less to vowel coding in central auditory nuclei, consistent with more limited synchronization to fine structure and the emergence of average-rate coding of envelope fluctuations. However, this hypothesis is largely unexplored, especially in background noise. The present study examined coding mechanisms at the level of the midbrain that support behavioral sensitivity to simple vowel-like sounds using neurophysiological recordings and matched behavioral experiments in the budgerigar. Stimuli were harmonic tone complexes with energy concentrated at one spectral peak, or formant frequency, presented in quiet and in noise. Behavioral thresholds for formant-frequency discrimination decreased with increasing amplitude of stimulus envelope fluctuations, increased in noise, and were similar between budgerigars and humans. Multiunit recordings in awake birds showed that the midbrain encodes vowel-like sounds both through response synchrony to envelope structure and through average rate. Whereas neural discrimination thresholds based on either coding scheme were sufficient to support behavioral thresholds in quiet, only synchrony-based neural thresholds could account for behavioral thresholds in background noise. These results reveal an incomplete transformation to average-rate coding of vowel-like sounds in the midbrain. Model simulations suggest that this transformation emerges due to modulation tuning, which is shared between birds and mammals. Furthermore, the results underscore the behavioral relevance of envelope synchrony in the midbrain for detection of small differences in vowel formant frequency under real-world listening conditions.
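The two coding schemes contrasted above can be made concrete with the two standard response measures they imply: average discharge rate, and vector strength of synchrony to the stimulus envelope frequency. The sketch below computes both from a list of spike times; the spike data, envelope frequency, and jitter values are placeholders, not recordings from the study.

```python
# Illustration of the two coding measures contrasted above: average discharge
# rate and vector strength (synchrony) to the stimulus envelope frequency.
# Spike times and the envelope frequency below are placeholder values.
import numpy as np

def average_rate(spike_times, duration):
    """Spikes per second over the stimulus duration."""
    return len(spike_times) / duration

def vector_strength(spike_times, freq):
    """Standard synchrony index: 1 = perfect phase locking, 0 = none."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f0 = 100.0     # assumed envelope (fundamental) frequency, Hz
    duration = 1.0 # stimulus duration, s
    # Spikes clustered near one envelope phase -> high vector strength.
    cycles = rng.integers(0, int(f0 * duration), size=200)
    jitter = rng.normal(0, 0.5e-3, size=200)  # 0.5 ms phase jitter
    spikes = cycles / f0 + jitter
    print("rate (sp/s):", average_rate(spikes, duration))
    print("vector strength:", vector_strength(spikes, f0))
```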