Preprint (2021)
DOI: 10.1101/2021.10.29.466492

Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience

Abstract: Integrating data from multiple experiments is common practice in systems neuroscience, but it requires inter-experimental variability to be negligible compared to the biological signal of interest. This requirement is rarely fulfilled; systematic changes between experiments can drastically affect the outcome of complex analysis pipelines. Modern machine learning approaches designed to adapt models across multiple data domains offer flexible ways of removing inter-experimental variability where classical statist…
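
To make the abstract's approach concrete: the citing papers quoted below describe the method as an autoencoder trained jointly with a domain classifier, connected through a gradient reversal layer so the latent representation becomes uninformative about which experiment a sample came from. The following is a minimal PyTorch sketch of that general domain-adversarial recipe; all names here (GradReverse, DomainAdversarialAE, the layer sizes) are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; flips (and scales) the gradient on
        # the backward pass, so the upstream encoder learns to fool the
        # domain classifier rather than help it.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    class DomainAdversarialAE(nn.Module):
        # Autoencoder whose latent code is trained to retain the biological
        # signal (via reconstruction) but not the experiment identity (via
        # the adversarial domain classifier). Hypothetical architecture.
        def __init__(self, n_features, n_latent, n_domains, lam=1.0):
            super().__init__()
            self.lam = lam
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_latent))
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_features))
            self.domain_clf = nn.Sequential(
                nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_domains))

        def forward(self, x):
            z = self.encoder(x)
            recon = self.decoder(z)
            domain_logits = self.domain_clf(GradReverse.apply(z, self.lam))
            return recon, domain_logits

    # Usage: joint loss = reconstruction + adversarial domain term.
    model = DomainAdversarialAE(n_features=128, n_latent=16, n_domains=5)
    x = torch.randn(32, 128)                 # toy batch of neural responses
    domain_ids = torch.randint(0, 5, (32,))  # which experiment each row is from
    recon, logits = model(x)
    loss = F.mse_loss(recon, x) + F.cross_entropy(logits, domain_ids)
    loss.backward()  # encoder receives reversed gradients from the domain loss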

Cited by 5 publications (7 citation statements) · References 50 publications

Citation statements:
“…This method of domain unification is unsupervised. Gonschorek et al (2021) use domain adaptation to align data across experiments of two-photon imaging recordings using an autoencoder model and a domain classifier. The authors successfully align their recording sessions but they do not test efficacy on unseen sessions.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…We compare the ability of SABLE to predict behaviour from sessions of unseen spike data against existing methods and against a variation of our own model. We look at the following existing models: LFADS (Pandarinath et al, 2017) and RAVE+ (Gonschorek et al, 2021). We also compare against our own model where we do not reverse the gradient between the encoder and decoder, which we denote SABLE-noREV.…”
Section: Models For Comparison (citation type: mentioning)
confidence: 99%
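
For context on the ablation described in the quote above: "reversing the gradient" refers to a gradient reversal layer, and a no-reversal variant ("noREV") amounts to dropping the sign flip, so the downstream loss no longer trains the upstream module adversarially. A hedged sketch of what such a toggle could look like, reusing the illustrative GradReverse function defined earlier; this is one reading of the ablation, not SABLE's actual code.

    def maybe_reverse(z, lam=1.0, reverse=True):
        # With reverse=True, gradients flowing back through z are sign-flipped,
        # training the upstream module adversarially against the downstream loss.
        # With reverse=False (a "noREV"-style ablation), gradients pass unchanged.
        return GradReverse.apply(z, lam) if reverse else z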
“…Importantly, propose a model which is invariant to the specific neurons used to represent the neural state within training data; in this work we look at unseen sessions and so do not aim to produce a model invariant to new neurons, but one that is able to identify and utilise seen neurons to reconstruct unperturbed trials. Gonschorek et al [2021] and Jude et al [2022] use domain adaptation to align data across recording sessions. In both, the authors use an autoencoder model and a domain classifier.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…We do not include ADAN [Farshchian et al, 2019], NoMAD [Karpowicz et al, 2022] or the generative model by Wen et al [2021] as all require at least some training data from a held out session or subject to be effective. We also do not test against Gonschorek et al [2021] or [Jude et al, 2022] as these approaches require many training sessions to be effective in predicting behaviour from an unseen session whereas we aim to do this with just one training session.…”
Section: Comparison Models (citation type: mentioning)
confidence: 99%
“…To start, the labeling process can be laborious, especially when labeling complicated skeletons on multiple views. Even with large labeled datasets, trained networks are often unreliable: they output "glitchy" predictions that require further manipulation before downstream analyses (Karashchuk et al, 2021;Monsees et al, 2022), and struggle to generalize to animals and sessions that were not represented in their labeled training set (Gonschorek et al, 2021). Even well-trained networks that achieve low pixel error on a small number of labeled test frames can still produce a sufficient fraction of error frames that hinder downstream scientific tasks.…”
Section: Introduction (citation type: mentioning)
confidence: 99%