2023
DOI: 10.1101/2023.05.28.542435
Preprint

Balancing Memorization and Generalization in RNNs for High Performance Brain-Machine Interfaces

Abstract: Brain-machine interfaces (BMIs) can restore motor function to people with paralysis but are currently limited by the accuracy of real-time decoding algorithms. Recurrent neural networks (RNNs) using modern training techniques have shown promise in accurately predicting movements from neural signals but have yet to be rigorously evaluated against other decoding algorithms in a closed-loop setting. Here we compared RNNs to other neural network architectures in real-time, continuous decoding of finger movements u…

Cited by 8 publications (7 citation statements) · References 35 publications

“…The performance of the proposed continuous learning algorithm for SNNs was benchmarked against several established methods: an SNN trained using backpropagation-through-time (BPTT), an SNN trained using the unmodified DeColle learning algorithm [39], a Kalman filter, and an LSTM decoder. The Kalman filter is often used as a standard benchmark for comparing new BMI decoders [20,43], while the LSTM was selected due to recent work demonstrating superior accuracy to other methods [11,44]. Additionally, an SNN trained with the unmodified DeColle learning rules described in [39] was included to demonstrate the necessity of the novel continuous learning algorithm.…”
Section: Batched Learning Comparison (mentioning)
confidence: 99%
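The Kalman filter named in this quote is the standard linear baseline in BMI decoding. Below is a minimal sketch of one predict/update step of a velocity Kalman filter; the matrix names, dimensions, and docstring interpretations are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def kalman_step(x, P, y, A, W, C, Q):
    """One predict/update step of a linear Kalman filter decoder (illustrative).

    x : (n,)   current state estimate (e.g., finger velocities)
    P : (n,n)  state covariance
    y : (m,)   binned neural observation for this time step
    A : (n,n)  state transition model
    W : (n,n)  state noise covariance
    C : (m,n)  observation (neural tuning) model
    Q : (m,m)  observation noise covariance
    """
    # Predict: propagate the state and its uncertainty forward in time.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update: correct the prediction with the neural observation.
    S = C @ P_pred @ C.T + Q                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```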
“…The BPTT SNN was included to show how the continuous learning rule affects adaptability. LSTM decoders have shown the ability to surpass the performance of Kalman filters in both offline and online decoding tasks [11,44], so they provide an additional benchmark against the current state of the art. The Kalman filter demonstrates increased performance with binned spike rates, so a sliding window of 10 time samples was used to bin the neural data before input to the Kalman filter.…”
Section: Neural Variability and Disruptions (mentioning)
confidence: 99%
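The 10-sample sliding window described in this quote amounts to a causal moving sum over the spike-count time series. A minimal sketch follows, assuming `spikes` is a (time, channels) array of per-sample spike counts; the array name, shapes, and Poisson example data are assumptions for illustration.

```python
import numpy as np

def bin_spikes(spikes, window=10):
    """Causal sliding-window sum over the time axis.

    spikes : (T, C) per-sample spike counts for C channels
    returns: (T, C) windowed counts; row t sums samples t-window+1 .. t
    """
    csum = np.cumsum(spikes, axis=0)
    binned = csum.copy()
    binned[window:] = csum[window:] - csum[:-window]  # subtract samples that left the window
    return binned

# Example: 1000 time samples from 96 hypothetical channels of Poisson "spikes".
rng = np.random.default_rng(0)
spikes = rng.poisson(0.1, size=(1000, 96))
rates = bin_spikes(spikes, window=10)  # windowed counts fed to the Kalman filter
```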
“…In particular, nonlinear BMIs have recently enabled high-performance speech decoding in real time (6, 37). Recently, our group developed both a temporally-convolved feedforward neural network (tcFNN), which outperformed the ReFIT Kalman filter, and a recurrent neural network (RNN), which outperformed the tcFNN and a velocity Kalman filter, in a 2-degree-of-freedom (DOF) dexterous finger movement task (38, 39).…”
Section: Introduction (mentioning)
confidence: 99%
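The tcFNN and RNN named in this quote are the cited group's own architectures; the sketch below is not their implementation, only a generic illustration of the underlying idea of a temporal convolution over binned neural features followed by a feedforward readout. All layer sizes and names are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TemporalConvDecoder(nn.Module):
    """Toy decoder: 1D convolution over time, then a linear velocity readout."""
    def __init__(self, n_channels=96, n_filters=32, kernel=5, n_outputs=2):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, n_filters, kernel_size=kernel)
        self.readout = nn.Linear(n_filters, n_outputs)

    def forward(self, x):
        # x: (batch, channels, time) binned firing rates
        h = torch.relu(self.conv(x))   # (batch, filters, time')
        h = h[:, :, -1]                # keep only the most recent time step
        return self.readout(h)         # (batch, n_outputs), e.g. finger velocities

# Example: decode a batch of 8 snippets, 96 channels x 20 bins each.
model = TemporalConvDecoder()
vel = model(torch.randn(8, 96, 20))  # -> shape (8, 2)
```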
“…Additionally, stochasticity (i.e., randomness) in ANN training approaches may lead to model instability, or variability across training runs, even when trained on the same data (39). Although an increasing number of studies have focused on adapting known ML architectures to neural decoding, there is a need to better understand whether ANNs will generalize and consistently converge to high-performing decoders.…”
Section: Introduction (mentioning)
confidence: 99%
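The training instability this quote raises can be quantified by retraining the same architecture from different random seeds on fixed data and measuring the spread of the resulting performance. A minimal sketch on synthetic data follows; the model, data, and metric are all illustrative assumptions, not the cited paper's protocol.

```python
import torch
import torch.nn as nn

def train_once(seed, X, Y, epochs=200):
    """Train a small decoder from one random seed; return its final loss."""
    torch.manual_seed(seed)  # seed controls weight initialization
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(X), Y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Fixed synthetic dataset; only the training seed varies across runs.
torch.manual_seed(0)
X, Y = torch.randn(256, 10), torch.randn(256, 2)
losses = torch.tensor([train_once(s, X, Y) for s in range(5)])
print(f"loss mean={losses.mean():.4f}, std={losses.std():.4f}")  # seed-to-seed spread
```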
“…However, after the participant grew more accustomed to the task (final 4 blocks), acquisition time for the 4D decoder dropped by an average of 0.4 s to 1.58 ± 0.06 s (a target acquisition rate of 76 ± 2 targets/min), and 100% of trials were completed. To compare this work with the previous NHP 2-finger task, where throughput varied from 1.98 to 3.04 bps across a variety of decoding algorithms 23,25, throughput for the current method was calculated as 2.64 ± 0.09 bps (see Methods for details). Table 1 summarizes statistics for the 4D decoder/task and 2D decoder/task.…”
(mentioning)
confidence: 99%
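The bits-per-second figures in this quote are Fitts-style information throughput. A common formulation divides the index of difficulty, log2(distance/width + 1), by the acquisition time; whether the cited papers use exactly this variant is an assumption here, and the trial numbers below are made up for illustration.

```python
import numpy as np

def throughput_bps(distances, widths, times):
    """Fitts-style throughput: index of difficulty (bits) / acquisition time (s)."""
    id_bits = np.log2(np.asarray(distances) / np.asarray(widths) + 1.0)
    return id_bits / np.asarray(times)

# Hypothetical trials: target distance and width in normalized units, time in s.
tp = throughput_bps(distances=[0.6, 0.4, 0.8],
                    widths=[0.15, 0.15, 0.15],
                    times=[1.5, 1.2, 1.8])
print(f"mean throughput: {tp.mean():.2f} bps")
```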