2021
DOI: 10.51628/001c.27358

Representation learning for neural population activity with Neural Data Transformers

Abstract: Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT’s ability to capture …
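The abstract's key claim is architectural: replacing recurrence with attention removes the sequential bottleneck, since every timestep can be encoded in parallel. Below is a minimal sketch, assuming PyTorch, of what such a non-recurrent encoder could look like; the class name, layer sizes, and exponential rate readout are illustrative assumptions, not the paper's released NDT implementation.

```python
# A minimal sketch (assumed PyTorch) of a non-recurrent encoder that maps
# binned spike counts to inferred firing rates. Hyperparameters and the
# rate readout are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn


class ToyNonRecurrentEncoder(nn.Module):
    def __init__(self, n_neurons: int, d_model: int = 64,
                 n_layers: int = 2, max_len: int = 512):
        super().__init__()
        self.embed = nn.Linear(n_neurons, d_model)   # one token per timestep
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.readout = nn.Linear(d_model, n_neurons)  # per-neuron log rates

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, time, neurons). Unlike an RNN, there is no
        # sequential recurrence: all timesteps are encoded in parallel.
        h = self.embed(spikes) + self.pos[:, : spikes.size(1)]
        return torch.exp(self.readout(self.encoder(h)))  # nonnegative rates


counts = torch.poisson(torch.ones(8, 50, 30))  # toy binned spike counts
print(ToyNonRecurrentEncoder(n_neurons=30)(counts).shape)  # (8, 50, 30)
```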

Cited by 19 publications (16 citation statements). References 27 publications (46 reference statements).

“…Finally, two baseline methods are quite new and cutting-edge: Neural Data Transformers (NDT) [40] and AutoLFADS [39]. These are expected to provide a high bar by which to judge the performance of other approaches.…”
Section: Neural State Estimation (mentioning)
confidence: 99%
“…Most experiments used a 6-layer encoder (~3M parameters). NDT2 adds a 2-layer decoder (0.7M parameters) over NDT1 [16]; we ran controls to ensure this extra capacity does not confer benefits to comparison models. To ensure that our models were not bottlenecked by compute or capacity in scaling (Section 4.2), models were trained to convergence with early stopping and progressively larger models were trained until no return was observed.…”
Section: Results (mentioning)
confidence: 99%
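The quoted protocol (train to convergence with early stopping, scale up until no further gains) is standard practice; a generic sketch of the early-stopping part, assuming PyTorch and a stand-in model, with the patience value an assumption rather than the cited paper's setting:

```python
# A generic early-stopping loop: stop once validation loss has not improved
# for `patience` epochs. The toy model, data, and patience are assumptions.
import torch
import torch.nn as nn

model = nn.Linear(10, 10)                     # stand-in for the real model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x, y = torch.randn(64, 10), torch.randn(64, 10)
x_val, y_val = torch.randn(16, 10), torch.randn(16, 10)

best, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(1000):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    with torch.no_grad():
        val = loss_fn(model(x_val), y_val).item()
    if val < best:
        best, bad_epochs = val, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:            # converged: stop training
            break
```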
“…Traditionally, the few hundred neurons in motor populations have been analyzed directly in terms of their population-level temporal dynamics [1]. NDT1 [16] follows this heritage and directly embeds the full population, with one token per timestep. Yet across contexts, the meaning of individual neurons may change, so operations to learn spatial representations may provide benefits.…”
Section: Approach (mentioning)
confidence: 99%
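The contrast the quote draws, one token per full population timestep (NDT1) versus learned spatial groupings of neurons, can be made concrete at the level of tensor shapes; the patch size and dimensions below are assumptions for illustration, not the papers' actual code.

```python
# Shape-level sketch of the two tokenizations contrasted above: temporal
# (NDT1-style, one token per timestep embedding the full population) versus
# spatial (neuron patches, so each (time, patch) pair is its own token).
import torch

batch, time, neurons, patch = 4, 50, 96, 32
spikes = torch.poisson(torch.ones(batch, time, neurons))

# Temporal tokenization: the whole population at each timestep is one token,
# giving a sequence of length `time`.
temporal_tokens = spikes                      # (batch, time, neurons)
print(temporal_tokens.shape[1])               # 50 tokens per trial

# Spatial tokenization: split the population into neuron patches and flatten
# (time, patch) into the token axis; patch embeddings can be re-learned when
# the meaning of individual neurons changes across contexts.
spatial_tokens = spikes.reshape(batch, time, neurons // patch, patch)
spatial_tokens = spatial_tokens.reshape(batch, time * (neurons // patch), patch)
print(spatial_tokens.shape[1])                # 150 tokens per trial
```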
“…While these data are usually high-dimensional [5, 6], the literature has shown that dimensionality-reduction approaches and generative models can faithfully explain population spike activities [7] and infer single-trial neural firing rates [8] with stable low-dimensional latent dynamics. At present, with rapid developments in machine learning and deep learning, the community has proposed several latent variable models (LVMs) with better efficiency and performance to extract the low-dimensional structure [9, 10, 11]. They bring novel insights into neuroscience [12, 13] and facilitate the development of brain–computer interfaces [14].…”
Section: Introduction (mentioning)
confidence: 99%
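The quoted passage's premise, that high-dimensional population spiking can be explained by stable low-dimensional latent dynamics, is often illustrated with a linear baseline. A minimal sketch using PCA on synthetic data follows; PCA stands in for the cited LVMs, and the synthetic data and all dimensions are assumptions.

```python
# Minimal illustration of dimensionality reduction on population activity:
# spikes are generated from a few shared latent signals, and PCA recovers a
# low-dimensional projection. PCA is a simple linear stand-in for the LVMs
# cited in the quote; everything here is synthetic and assumed.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
time_bins, n_neurons, n_latents = 200, 80, 3

# Latent random-walk signals driving the whole population.
latents = np.cumsum(rng.normal(size=(time_bins, n_latents)), axis=0)
readout = rng.normal(size=(n_latents, n_neurons))
spikes = rng.poisson(np.exp(0.1 * latents @ readout))  # binned counts

pca = PCA(n_components=n_latents)
recovered = pca.fit_transform(spikes)         # (time_bins, n_latents)
print(pca.explained_variance_ratio_.sum())    # bulk of variance in 3 dims
```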