2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00944
Semantic-Aware Auto-Encoders for Self-supervised Representation Learning

Abstract: The resurgence of unsupervised learning can be attributed to the remarkable progress of self-supervised learning, which includes generative (G) and discriminative (D) models. In computer vision, the mainstream self-supervised learning algorithms are D models. However, designing a D model could be over-complicated; also, some studies hinted that a D model might not be as general and interpretable as a G model. In this paper, we switch from D models to G models using the classical auto-encoder (AE). Note that a …
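As a quick illustration of the classical auto-encoder (AE) the abstract builds on, the sketch below shows a generic encoder-decoder trained with a reconstruction loss. It is a minimal PyTorch sketch with assumed layer sizes, not the paper's semantic-aware variant.

```python
# Minimal sketch of a classical auto-encoder (AE). Layer sizes are
# illustrative assumptions; this is the generic formulation only,
# not the paper's semantic-aware method.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=64):
        super().__init__()
        # Encoder maps the input to a low-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder reconstructs the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder()
x = torch.randn(8, 784)                       # a toy batch of flattened images
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)       # reconstruction objective
loss.backward()
```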

Cited by 5 publications (1 citation statement)
References 82 publications
“…The most effective strategies to use self-supervision in computer vision include contrastive learning [48], [51], [52], metric learning [53], and generation [54]. In these strategies, people make two independent data augmentations for each image and input them into a learnable and non-learnable network.…”
Section: DNA++: Distillation via Self-supervised Learning (mentioning)
Confidence: 99%
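The cited strategy of feeding two independent augmentations of the same image into a learnable and a non-learnable network can be sketched as follows. This is a minimal, assumed illustration (toy augmentation, toy encoders, and a BYOL-style negative cosine loss), not the exact setup of any of the cited works.

```python
# Hedged sketch of the two-augmentation setup described in the citation:
# one view goes through a learnable (online) encoder, the other through a
# non-learnable (frozen) copy. All names and sizes here are assumptions
# for illustration.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def augment(x):
    # Toy augmentation: random horizontal flip plus small Gaussian noise.
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[-1])
    return x + 0.05 * torch.randn_like(x)

online_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # learnable branch
target_net = copy.deepcopy(online_net)                                  # non-learnable branch
for p in target_net.parameters():
    p.requires_grad = False

img = torch.rand(8, 3, 32, 32)                 # a toy batch of images
view1, view2 = augment(img), augment(img)      # two independent augmentations

z1 = online_net(view1)                          # gradients flow through this branch
with torch.no_grad():
    z2 = target_net(view2)                      # frozen branch provides the target

# Align the two views, e.g. via negative cosine similarity (BYOL-style).
loss = -F.cosine_similarity(z1, z2, dim=-1).mean()
loss.backward()

# In momentum-based variants, the target network would then be updated as an
# exponential moving average of the online network's weights.
```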