2021
DOI: 10.48550/arxiv.2107.12329
Preprint

AASAE: Augmentation-Augmented Stochastic Autoencoders

Abstract: Recent methods for self-supervised learning can be grouped into two paradigms: contrastive and non-contrastive approaches. Their success can largely be attributed to data augmentation pipelines which generate multiple views of a single input that preserve the underlying semantics. In this work, we introduce augmentation-augmented variational autoencoders (AAVAE), a third approach to self-supervised learning based on autoencoding. We derive AAVAE starting from the conventional variational autoencoder (VAE), by r…
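The abstract describes the approach only at a high level (it is truncated above), so the following is a hedged sketch of the general idea: an autoencoder that encodes an augmented view of an input and is trained to reconstruct the clean, un-augmented input, so the augmentation takes over the regularizing role of the VAE's KL term. The architecture sizes, the deterministic encoder (the paper's encoder is stochastic), and the additive-noise stand-in for a real image-augmentation pipeline are all illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AugmentedAutoencoder(nn.Module):
    """Toy encoder/decoder for 32x32 RGB images (sizes are illustrative)."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 16x16 -> 32x32
        )

    def forward(self, x_aug: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x_aug))


def training_step(model: nn.Module, x_clean: torch.Tensor, x_aug: torch.Tensor) -> torch.Tensor:
    # Reconstruct the clean input from the augmented view; the augmentation,
    # rather than a KL penalty, regularizes the learned representation.
    x_hat = model(x_aug)
    return F.mse_loss(x_hat, x_clean)


if __name__ == "__main__":
    model = AugmentedAutoencoder()
    x_clean = torch.rand(4, 3, 32, 32)  # stand-in batch of clean images
    # Additive noise stands in for the semantics-preserving augmentations
    # (crops, flips, color jitter) a real pipeline would use.
    x_aug = (x_clean + 0.1 * torch.randn_like(x_clean)).clamp(0.0, 1.0)
    loss = training_step(model, x_clean, x_aug)
    loss.backward()
    print(f"reconstruction loss: {loss.item():.4f}")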

Cited by 3 publications (3 citation statements)
References 29 publications

Citation statements
“…Finally, we do not consider methods based on VAEs [1,45], since they have been shown to yield poor performance in the large and medium scale. For instance, as found by [21], a VAE trained offline on CIFAR10 reaches an accuracy of 57.2%, which is lower than any method (except VICReg) trained continually on CIFAR100 with CaSSLe.…”
Section: B Derivation of Distillation Losses
confidence: 82%
“…Here we fixed P (R) = P (X ), but better anchor priors can be designed which may further improve performance. Finally, anchoring also presents interesting avenues for theoretical analysis to understand its connection to reconstruction-based representation learning approaches such as (Falcon et al 2021) and (Sinha and Dieng 2021).…”
Section: Conclusion and Discussion
confidence: 99%
“…Outside of GANs, variational autoencoders (VAEs) have been adapted to generate more semantically meaningful representations by trading off latent channel capacity and independence constraints with reconstruction accuracy (Higgins et al, 2016), an idea that has also been incorporated into recognition improvements using patch-level bottlenecks (Gupta et al, 2020), which encourage a VAE to focus on useful patterns in images. By incorporating data-augmentation, VAE is also shown to achieve fair discriminative performance (Falcon et al, 2021). Recently, works like MAE (He et al, 2021) and CAE have learned representations by solving masked reconstruction tasks using vision transformers.…”
Section: Related Work
confidence: 99%