2022
DOI: 10.1007/s10827-022-00839-3

Neural manifold analysis of brain circuit dynamics in health and disease

Abstract: Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than for those applicable to single-cell experiments. One approach that has gained recent popularity is neural manifold learning. This approach takes advantage of the fact that often, even though neural datasets may be very high dimensional, the dynamics of neural activity tends to tra…

Cited by 23 publications (26 citation statements)
References 129 publications
“…Then, we used a supervised machine learning random forest algorithm 45 to identify feature importance by measuring how much each feature reduces or increases the accuracy across multiple decision trees. Then, for additional visualization, we tested three linear and non-linear dimensionality reduction algorithms 46 : principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and uniform manifold approximation and projection (UMAP).…”
Section: Methods
confidence: 99%
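The workflow quoted above, random-forest feature importance followed by linear and non-linear 2-D embeddings, can be sketched as follows. This is an illustrative example on synthetic data, not the cited study's code; the dataset, parameters, and variable names are assumptions. UMAP is omitted because it lives in the separate `umap-learn` package, but it follows the same `fit_transform` pattern.

```python
# Illustrative sketch (synthetic data, not the cited study's pipeline):
# random-forest feature importance, then PCA and t-SNE embeddings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Synthetic stand-in for a (samples x features) matrix.
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=0)

# Impurity-based importance, averaged across the forest's decision trees.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_   # normalized to sum to 1

# Linear (PCA) and non-linear (t-SNE) 2-D embeddings for visualization.
X_pca = PCA(n_components=2).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30,
              random_state=0).fit_transform(X)

print(importances.argsort()[::-1][:3])  # indices of top-3 features
print(X_pca.shape, X_tsne.shape)        # (200, 2) (200, 2)
```

Note that impurity-based importances only rank features within this fitted forest; permutation importance is a common cross-check.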
“…To produce comparable embeddings 5 , we first trained a model on each worm separately. We then extracted the T Y layer and behaviour predictor layer from the model with the least loss (Worm-1, in this case).…”
Section: Consistency Of Neuronal Manifolds Across Worms
confidence: 99%
“…The goal of neuronal manifold learning is to find low-dimensional representations of data that preserve particular data properties. In neuroscience, a broad range of classical dimensionality reduction techniques is being employed, including but not limited to principal component analysis (PCA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), Laplacian eigenmaps (LEM), t-SNE, and uniform manifold approximation and projection (UMAP) [5]. More recently, advances in artificial intelligence in general and deep learning methods, in particular, have given rise to a new class of (often non-linear) dimensionality reduction techniques, e.g., based on autoencoder architectures [6,7,8] or contrastive learning frameworks [9].…”
Section: Introduction
confidence: 99%
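The statement above defines manifold learning as finding low-dimensional representations that preserve particular data properties. A minimal sketch of that idea, on a toy dataset rather than neural recordings: Isomap (one of the techniques listed) recovers the intrinsic coordinate of a 2-D "swiss roll" surface embedded in 3-D by preserving geodesic (along-manifold) distances, where linear PCA cannot. All parameters here are illustrative assumptions.

```python
# Toy illustration of manifold learning vs. linear projection.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# X: points on a rolled-up 2-D sheet in 3-D; t: position along the roll.
X, t = make_swiss_roll(n_samples=1000, random_state=0)

emb_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
emb_pca = PCA(n_components=2).fit_transform(X)

# Isomap's first coordinate tracks the intrinsic parameter t, because
# geodesic distances along the sheet are preserved in the embedding;
# PCA's first component mixes the winding directions.
corr_iso = abs(np.corrcoef(emb_iso[:, 0], t)[0, 1])
corr_pca = abs(np.corrcoef(emb_pca[:, 0], t)[0, 1])
print(round(corr_iso, 3), round(corr_pca, 3))
```

The same "preserve a chosen property" framing distinguishes the listed methods: PCA preserves variance, MDS pairwise distances, Isomap geodesic distances, LLE local linear structure, and t-SNE/UMAP local neighborhoods.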
“…Individual neurons are embedded in brain networks that collectively organize their high-dimensional neuronal activity patterns into lower-dimensional neuronal manifolds (Churchland et al, 2012; Gallego et al, 2017). The goal of neuronal manifold learning techniques is to find low-dimensional representations of neuronal data that enable insights into the structure of neuronal dynamics and their relation to behavior (Mitchell-Heggs et al, 2023). In neuroscience, classic dimensionality reduction techniques, such as principal component analysis (PCA), Laplacian eigenmaps (LEM), and t-SNE, are complemented by modern deep learning techniques such as CEBRA (Schneider et al, 2023) and BundDLe-Net (Kumar et al, 2023).…”
Section: Multilevel Causal Modeling In C. elegans
confidence: 99%
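The claim above, that high-dimensional neuronal activity patterns are organized onto lower-dimensional manifolds, can be made concrete with a small simulation. This is a hedged sketch on synthetic data, not real recordings or any cited paper's pipeline: a few shared latent signals drive many "neurons", so PCA finds that almost all population variance lies in just a few dimensions.

```python
# Synthetic demonstration: 100-neuron activity driven by 3 latent signals
# lies near a 3-D manifold, which PCA recovers (assumed toy setup).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
T, n_neurons, n_latent = 500, 100, 3

# Three shared latent dynamics (e.g., slow oscillations).
time = np.linspace(0, 10, T)
latents = np.stack([np.sin(time),
                    np.cos(0.5 * time),
                    np.sin(2.0 * time)], axis=1)   # (T, 3)

# Project latents into 100 neurons, plus small private noise.
mixing = rng.normal(size=(n_latent, n_neurons))
activity = latents @ mixing + 0.1 * rng.normal(size=(T, n_neurons))

explained = PCA().fit(activity).explained_variance_ratio_
print(explained[:3].sum())  # bulk of variance in 3 of 100 dimensions
```

Deep-learning approaches such as the cited CEBRA and BundDLe-Net target the same low-dimensional structure but can capture non-linear manifolds and condition the embedding on behavior.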
“…Due to increasing awareness of the importance of (potentially widely distributed) neuronal activity patterns for behavior, and the difficulty in scaling up hand-crafted models to large-scale neuronal recordings, machine learning methods (also referred to as decoding or multivariate pattern analysis (MVPA) models) have been developed to uncover relations between complex neuronal activity patterns, cognitive states, and behaviors (Norman et al, 2006; Mitchell et al, 2008; Pereira et al, 2009). More recently, these models have been complemented by algorithms for learning neuronal manifolds that enable the visualization of high-dimensional neuronal dynamics (Mitchell-Heggs et al, 2023) and their relation to behavior (Schneider et al, 2023; Kumar et al, 2023). However, the ability to visualize and decode behavior from complex neuronal activity patterns does not imply that we have revealed their representational contents (Ritchie et al, 2020).…”
Section: Introduction
confidence: 99%
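The decoding/MVPA idea referenced above, training a classifier to predict a behavioral variable from multivariate activity patterns, can be sketched as follows. The data are synthetic and every parameter is an assumption; this illustrates the method class, not any cited model.

```python
# MVPA-style decoding sketch: cross-validated classification of a binary
# "behavioral" state from synthetic population activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
T, n_neurons = 400, 50
behavior = rng.integers(0, 2, size=T)  # binary behavioral state per sample

# Population activity: noise plus a distributed state-dependent pattern.
pattern = rng.normal(size=n_neurons)
activity = rng.normal(size=(T, n_neurons)) + 0.5 * np.outer(behavior, pattern)

# 5-fold cross-validated decoding accuracy; ~0.5 would mean no signal.
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         activity, behavior, cv=5)
print(scores.mean())
```

Above-chance decoding shows the pattern carries information about the behavior, which is exactly the point of the Ritchie et al caveat: decodability alone does not establish what the pattern represents or how downstream circuits use it.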