2019
DOI: 10.1007/978-3-030-20351-1_68

Variational Autoencoder with Truncated Mixture of Gaussians for Functional Connectivity Analysis

Abstract: Resting-state functional connectivity states are often identified as clusters of dynamic connectivity patterns. However, existing clustering approaches do not distinguish major states from rarely occurring minor states and hence are sensitive to noise. To address this issue, we propose to model major states using a non-linear generative process guided by a Gaussian-mixture distribution in a low-dimensional latent space, while separately modeling the connectivity patterns of minor states by a non-informative un…
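The abstract's core idea can be made concrete with a short sketch. The PyTorch code below is a hypothetical illustration, not the authors' released implementation: it builds a VAE whose latent prior is a learnable mixture of Gaussians, with the KL term estimated by Monte Carlo sampling since it has no closed form for a mixture prior. The truncation mechanism for minor states is omitted, and x_dim=1225 is an assumed stand-in for the vectorized upper triangle of a 50 × 50 connectivity matrix.

```python
# Hypothetical sketch of a VAE with a Gaussian-mixture latent prior.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMMPriorVAE(nn.Module):
    def __init__(self, x_dim=1225, z_dim=8, n_components=5, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))
        # Learnable mixture parameters: K means, log-variances, and weights.
        self.pm = nn.Parameter(torch.randn(n_components, z_dim))
        self.plv = nn.Parameter(torch.zeros(n_components, z_dim))
        self.pw = nn.Parameter(torch.zeros(n_components))

    @staticmethod
    def _log_normal(z, mu, logvar):
        # Diagonal-Gaussian log-density, summed over latent dimensions.
        return -0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                       + math.log(2 * math.pi)).sum(-1)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(z)
        # KL(q(z|x) || p(z)) has no closed form for a mixture prior, so use
        # a single-sample Monte Carlo estimate: log q(z|x) - log p(z).
        log_q = self._log_normal(z, mu, logvar)
        log_p = torch.logsumexp(
            F.log_softmax(self.pw, 0)
            + self._log_normal(z.unsqueeze(1), self.pm, self.plv), dim=1)
        return F.mse_loss(recon, x, reduction="none").sum(-1) + (log_q - log_p)
```

Keeping the prior a mixture (rather than a single standard Gaussian) is what lets latent clusters correspond to connectivity states; each mixture component can be read as one major state.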

Cited by 21 publications (17 citation statements)
References 16 publications
“…deep networks has often been defined by minimizing the reconstruction error [34], such as in auto-encoders (AEs). AEs have been shown to be great tools for unsupervised representation learning in a variety of tasks, including image inpainting [43], feature ranking [54], denoising [57], clustering [65], defense against adversarial examples [35], and anomaly detection [48,52]. Although AEs have led to far-reaching success in data representation, there are some caveats to using reconstruction error as the sole metric for representation learning: (1) as also argued in [58], it forces the model to reconstruct all parts of the input, even those that are irrelevant to any given task or contaminated by noise; (2) it leads to a mechanism that depends entirely on single-point data abstraction, i.e., the AE learns only to reconstruct its input while neglecting the other data points present in the dataset.…”
Section: Introduction
confidence: 99%
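The objective this excerpt refers to fits in a few lines. The sketch below uses hypothetical names and sizes (not from any cited work) and trains a plain auto-encoder with mean-squared reconstruction error as its only loss, exactly the single-metric setup whose caveats the quote raises.

```python
# Illustrative auto-encoder trained purely on reconstruction error.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                    nn.Linear(128, z_dim))
        self.decode = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                    nn.Linear(128, x_dim))

    def forward(self, x):
        return self.decode(self.encode(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 784)        # stand-in batch of inputs
loss = F.mse_loss(model(x), x)  # reconstruction error only: every input
opt.zero_grad()                 # dimension must be rebuilt, noise included
loss.backward()                 # (caveat 1 in the quoted passage)
opt.step()
```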
“…Modeling the temporal dynamics is desirable but non-trivial, since they are highly irregular, complex, and variable. To fill this gap, we direct future studies toward designing a recurrent neural network (Chen and Hu, 2018; Cui et al., 2019; Shi et al., 2018; Sutskever et al., 2014; Zhao et al., 2019) as an add-on to the VAE to further learn sequence representations, for example with a self-supervised predictive learning strategy (Kashyap and Keilholz, 2020; Khosla et al., 2019a).…”
Section: Discussion
confidence: 99%
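A hedged sketch of the direction this excerpt proposes: a GRU stacked on per-frame VAE latents, trained with a self-supervised next-step prediction loss. All shapes and module names here are assumptions for illustration, not details from the cited works.

```python
# Recurrent add-on over VAE latents with a predictive objective (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, hidden = 8, 64
rnn = nn.GRU(z_dim, hidden, batch_first=True)
head = nn.Linear(hidden, z_dim)

latents = torch.randn(16, 100, z_dim)    # (batch, time, z): VAE-encoded frames
h, _ = rnn(latents[:, :-1])              # summarize the sequence up to step t
pred = head(h)                           # predict the latent at step t + 1
loss = F.mse_loss(pred, latents[:, 1:])  # self-supervised predictive loss
```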
“…VAEs have recently been popularized for non-rs-fMRI data and have gained attention due to their interpretable latent space and their ability to variationally learn generative factors that fit a given prior. Previous work evaluates representation learning with VAEs on rs-fMRI data that has first undergone dimensionality reduction [8]–[11]. These dimensionality reductions may introduce overly specific inductive biases and, as a result, limit the expressivity of deep learning methods, especially since neural networks are considered universal function approximators.…”
Section: A Context
confidence: 99%
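For concreteness, this is the kind of two-stage pipeline the excerpt critiques, sketched with hypothetical sizes: a fixed linear reduction (here PCA) applied before any deep model sees the data.

```python
# Illustrative reduce-then-model pipeline (hypothetical sizes).
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(500, 40000)                 # stand-in: time points x voxels
X_low = PCA(n_components=100).fit_transform(X)  # fixed linear projection
# A VAE trained on X_low inherits this linear inductive bias, which is the
# expressivity concern the quoted passage raises.
```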