2021
DOI: 10.1101/2021.01.16.426955
Preprint

Representation learning for neural population activity with Neural Data Transformers

Abstract: Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT’s ability to captu…
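As a rough illustration of the architecture the abstract describes, the following is a minimal sketch, not the authors' released NDT code: a non-recurrent transformer encoder applied to binned spike counts. All module names, layer sizes, and the rate readout are illustrative assumptions.

```python
# Minimal sketch, not the authors' released NDT code: a non-recurrent
# transformer encoder over binned spike counts, in the spirit of the abstract.
# All module names, sizes, and the rate readout are illustrative assumptions.
import torch
import torch.nn as nn

class TinyNeuralDataTransformer(nn.Module):
    def __init__(self, n_neurons, d_model=64, n_heads=4, n_layers=2, max_len=512):
        super().__init__()
        self.input_proj = nn.Linear(n_neurons, d_model)   # embed each time bin's spike counts
        self.pos_emb = nn.Embedding(max_len, d_model)     # learned position per time bin
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.readout = nn.Linear(d_model, n_neurons)      # per-bin, per-neuron log firing rates

    def forward(self, spikes):
        # spikes: (batch, time_bins, n_neurons) binned spike counts
        t = spikes.shape[1]
        pos = torch.arange(t, device=spikes.device)
        x = self.input_proj(spikes.float()) + self.pos_emb(pos)
        # Every time bin is processed in parallel; no sequential recurrence,
        # which is the latency argument the abstract makes for real-time use.
        return self.readout(self.encoder(x))

# Usage: 8 trials, 100 time bins, 137 neurons of synthetic Poisson counts.
model = TinyNeuralDataTransformer(n_neurons=137)
log_rates = model(torch.poisson(torch.full((8, 100, 137), 2.0)))
print(log_rates.shape)  # torch.Size([8, 100, 137])
```

Because attention sees all time bins at once, inference here is a single parallel pass rather than a bin-by-bin rollout; that design choice is what the abstract contrasts with RNN-based state space models.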

Cited by 14 publications (22 citation statements); references 35 publications.

Citation statements:
“…When successful, representations learned from populations of neurons can provide insights into how neural circuits work to encode their inputs and drive decisions, and allow for robust and stable decoding of these correlates. Over the last decade, a number of unsupervised learning approaches have been introduced to build representations of neural population activity agnostic to specific labels or downstream decoding tasks (7; 8; 9; 10; 11; 12; 13; 14). Such methods have provided exciting new insights into the stability of neural responses (15), individual differences (11), and remapping of neural responses through learning (16).…”
Section: Introduction (mentioning)
Confidence: 99%
“…Conditioning on 137 neurons (i.e. using 45 held-out neurons), we obtained a co-smoothing of 0.331 ± 0.001 (over 5 random seeds). For comparison, Pei et al (2021) report 0.187 for GPFA (Yu et al, 2009), 0.225 for SLDS (Linderman et al, 2017), 0.329 for Neural Data Transformers (Ye and Pandarinath, 2021), and 0.346 for AutoLFADS (LFADS with large-scale hyperparameter optimization; Keshtkaran et al, 2021) on the same dataset.…”
Section: Experiments and Results (mentioning)
Confidence: 99%
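The co-smoothing figures quoted above score predicted firing rates for held-out neurons with a Poisson log-likelihood expressed in bits per spike. The snippet below is a hedged sketch of that kind of metric, not the benchmark's reference implementation; the function name, the mean-rate null model, and the toy data are assumptions for illustration.

```python
# Hedged sketch of the co-smoothing score quoted above: predicted rates for
# held-out neurons are compared to their spike counts with a Poisson
# log-likelihood in bits per spike. Function name, null model details, and the
# toy data are illustrative; the benchmark's own implementation is definitive.
import numpy as np

def poisson_bits_per_spike(spikes, rates):
    """spikes, rates: arrays of shape (trials, bins, held_out_neurons), rates > 0."""
    # Poisson log-likelihood terms; the log(k!) terms cancel in the difference below.
    ll_model = np.sum(spikes * np.log(rates) - rates)
    # Null model: each held-out neuron's mean rate over all trials and bins.
    mean_rate = np.maximum(spikes.mean(axis=(0, 1), keepdims=True), 1e-9)
    ll_null = np.sum(spikes * np.log(mean_rate) - mean_rate)
    # Improvement over the null, normalized by the total spike count, in bits.
    return (ll_model - ll_null) / (np.log(2) * spikes.sum())

# Toy check: scoring against the true generating rates beats the flat null.
rng = np.random.default_rng(0)
true_rates = rng.uniform(0.5, 3.0, size=(20, 50, 45))  # e.g. 45 held-out neurons
spikes = rng.poisson(true_rates)
print(poisson_bits_per_spike(spikes, true_rates))       # positive in expectation
```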
“…Through combining our approach with this nonlocal view mining strategy, we may be able to build even further invariance into our model's content space. Combining our SSL-backed approach with a sequential encoder [10] or transformer [50] is another exciting line of future research that can be used to model latent structure over longer timescales.…”
Section: Discussion (mentioning)
Confidence: 99%