2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01614

Unsupervised Visual Representation Learning by Online Constrained K-Means


Cited by 16 publications (5 citation statements)
References 14 publications
“…To eliminate the cost of labeling, self-supervised learning is developed to obtain pre-trained models from unlabeled data. Many pretext tasks have been proposed for effective learning, e.g., instance discrimination, which treats each instance as an individual class and aligns random augmentations of the same instance [6,18]; cluster discrimination, which explores relationships between different instances [5,32]; and masked image modeling, which leverages the information within each image [16]. Moreover, [43] demonstrates that self-supervised pre-training improves supervised pre-training with strong data augmentations.…”
Section: Visual Pre-training
confidence: 99%
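The instance-discrimination objective described in the quote above is typically implemented as an InfoNCE-style contrastive loss over two augmented views of each image. The sketch below is a minimal illustration assuming in-batch negatives; the function name, embedding sizes, and 0.1 temperature are illustrative choices, not the cited papers' exact recipes.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmentations of the same N images."""
    z1 = F.normalize(z1, dim=1)               # project embeddings onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (N, N) scaled cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Row i's positive is its own second view on the diagonal; every other
    # instance in the batch acts as a negative ("each instance is a class").
    return F.cross_entropy(logits, targets)

# Stand-in encoder outputs for a batch of 32 images:
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```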
“…Consequently, many research efforts have been devoted to exploring unsupervised pre-training, which can eliminate the labeling cost for a vast number of instances. Many effective self-supervised methods have been developed, e.g., instance discrimination [7,18], cluster discrimination [5,32], and masked image modeling [17]. Compared to their supervised counterparts, fine-tuning models pre-trained with these unsupervised methods can achieve comparable or even better performance on downstream tasks.…”
Section: Introduction
confidence: 99%
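Masked image modeling, the third pretext task listed in this quote, hides a subset of patch tokens and trains the network to reconstruct them from the visible context. The toy sketch below shows the core idea; the 75% mask ratio, the single transformer layer, and all tensor shapes are assumptions for illustration, not any particular method's configuration.

```python
import torch
import torch.nn as nn

patches = torch.randn(8, 196, 768)            # (batch, tokens, dim), e.g. ViT patch embeddings
mask = torch.rand(8, 196) < 0.75              # hide ~75% of the tokens at random
encoder = nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True)

visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)  # zero out the masked tokens
recon = encoder(visible)                                # toy "reconstruction" pass
loss = (recon - patches)[mask].pow(2).mean()            # MSE only on masked positions
```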
“…Unsupervised representation learning has the potential to overcome data scarcity and lack of availability by revealing previously hidden patterns and clusters. Clustering-based methods for this purpose may use either k-means [37]-[39] or Gaussian mixture models [40] to group images with shared visual characteristics. The clusters may then be used to create a set of visual prototypes that can serve as a feature representation for subsequent tasks.…”
Section: Unsupervised Learning
confidence: 99%
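As one concrete reading of the quoted description, the sketch below clusters image embeddings with k-means and keeps the centroids as visual prototypes; the cluster ids then double as pseudo-labels. The 128-dimensional features, choice of 256 clusters, and use of scikit-learn are illustrative assumptions, not the surveyed methods' actual settings.

```python
import numpy as np
from sklearn.cluster import KMeans

features = np.random.randn(10000, 128).astype(np.float32)  # stand-in image embeddings
kmeans = KMeans(n_clusters=256, n_init=10, random_state=0).fit(features)

prototypes = kmeans.cluster_centers_   # (256, 128): one visual prototype per cluster
pseudo_labels = kmeans.labels_         # cluster id per image, usable as a pseudo-label
```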
“…Challenge 2: How to avoid "training collapse" in unsupervised dynamics disentanglement? Despite the great success of unsupervised representation learning [12,13,14], it remains a challenge to disentangle the controllable and non-controllable dynamic patterns in non-stationary visual scenes. One potential solution is to employ modular structures that learn different dynamics in separate branches.…”
Section: Key Challenges
confidence: 99%