2020
DOI: 10.1101/2020.06.04.134635
Preprint
A generative modeling approach for interpreting population-level variability in brain structure

Abstract: Understanding how neural structure varies across individuals is critical for characterizing the effects of disease, learning, and aging on the brain. However, disentangling the different factors that give rise to individual variability is still an outstanding challenge. In this paper, we introduce a deep generative modeling approach to find different modes of variation across many individuals. To do this, we start by training a variational autoencoder on a collection of auto-fluorescence images from a little o…

Cited by 3 publications (6 citation statements); References 10 publications.
“…These three classifiers are used first to ensure that the latent representations are able to capture some relevant information regarding the folding patterns. Then, we can focus our analysis on the features contributing the most to the success of the classification using the generative power of the β-VAE, like in [14]. We travel throughout the latent space modifying only one of these features and observe the generated folding patterns.…”
Section: Learning a Representation of the Normal Variability
Confidence: 99%
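The latent-space traversal the quote describes can be sketched as follows. The decoder here is a hypothetical stand-in (a fixed linear map) for a trained β-VAE decoder, and the latent size, dimension index, and traversal range are all illustrative, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained beta-VAE decoder: maps an L-dim latent
# vector to a flattened "image". A real decoder would be a learned
# convolutional network.
L, IMG = 8, 64
W = rng.standard_normal((IMG, L))

def decode(z):
    return W @ z

# Traverse one latent dimension while holding the others fixed,
# collecting the generated patterns for inspection.
z_ref = np.zeros(L)                 # reference latent code
dim = 3                             # feature to vary (illustrative)
steps = np.linspace(-3.0, 3.0, 7)   # traversal range (illustrative)

generated = []
for t in steps:
    z = z_ref.copy()
    z[dim] = t                      # modify only one feature
    generated.append(decode(z))

generated = np.stack(generated)     # shape: (7, 64)
```

Inspecting how `generated` changes along the traversal is what reveals which visual factor (e.g. a folding pattern) a given latent feature encodes.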
“…Our β-VAE was composed of fully convolutional encoder and decoder with symmetrical architectures comprising three convolutional blocks and two fully connected layers. We did a grid search (L = 8–100, β = 1–20; ranges based on previous works [14] and reconstruction ability), where hyperparameters were chosen according to the classification performance of the Deletion classifier applied to 100 controls and 100 synthetic samples using 5-fold stratified cross-validation. We selected L = 100, β = 2, and a learning rate of 2e-4.…”
Section: Datasets and Implementation
Confidence: 99%
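The two hyperparameters tuned in the quoted grid search, latent size L and weight β, enter through the β-VAE objective, which trades reconstruction quality against a β-weighted KL term. A minimal NumPy sketch of that loss for a diagonal-Gaussian posterior (squared-error reconstruction and illustrative values; not the cited implementation):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta):
    """Reconstruction error plus beta-weighted KL(q(z|x) || N(0, I))."""
    recon = np.sum((x - x_recon) ** 2)
    # Closed-form KL divergence between a diagonal Gaussian and N(0, I)
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon + beta * kl

# Illustrative check: when the posterior equals the prior
# (mu = 0, logvar = 0), the KL term vanishes.
x = np.ones(10)
loss = beta_vae_loss(x, x, np.zeros(4), np.zeros(4), beta=2.0)
```

Raising β penalizes posteriors far from the prior more strongly, which is what encourages the disentangled latent features exploited in the traversals above.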
“…There is thus a key challenge to learn disentangled representations where variables of interest would be independently and explicitly encoded [Bengio et al., 2013]. There are three main paradigms to learn disentangled representations: unsupervised [Chen et al., 2018; Liu et al., 2020], supervised or semi-supervised [Zhao et al., 2019], and weakly-supervised [Ruiz et al., 2019]. In the supervised or semi-supervised case, the factors of interest are explicitly labelled in all or in a part of the training set.…”
Section: Introduction
Confidence: 99%