2021
DOI: 10.48550/arxiv.2109.12909
Preprint
Compressive Visual Representations

Abstract: Learning effective visual representations that generalize well without human supervision is a fundamental problem in order to apply Machine Learning to a wide variety of tasks. Recently, two families of self-supervised methods, contrastive learning and latent bootstrapping, exemplified by SimCLR and BYOL respectively, have made significant progress. In this work, we hypothesize that adding explicit information compression to these algorithms yields better and more robust representations. We verify this by deve…
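For readers unfamiliar with the contrastive family the abstract refers to, here is a minimal sketch of the SimCLR-style NT-Xent contrastive objective. This is an illustrative NumPy implementation of the generic loss, not the paper's C-SimCLR/C-BYOL method; the function name and toy inputs are assumptions for the example.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss over a batch of paired views.

    z1, z2: (N, D) embeddings of two augmentations of the same N images.
    Each row of z1 is attracted to the matching row of z2 (its positive)
    and repelled from the other 2N - 2 embeddings (its negatives).
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # work in cosine space
    sim = (z @ z.T) / temperature                     # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # Positive for row i is row i+N (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Numerically stable log-softmax denominator along each row.
    m = sim.max(axis=1, keepdims=True)
    log_denom = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    return float(-(sim[np.arange(2 * n), pos] - log_denom).mean())
```

With perfectly aligned positives and orthogonal negatives the loss is small; shuffling the pairing so positives no longer match drives it up, which is the behavior the objective is designed to exploit.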


Cited by 2 publications (9 citation statements)
References 44 publications
“…2) in conjunction with a contrastive loss. […] negative points with different strengths corresponding to the distance to the anchor point. These have the effect of separating the mean representation of the classes.…”

A results table was flattened into the excerpt above; reconstructed (the two columns appear to be top-1 and top-5 accuracy, and the method name of the first row was cut off in the excerpt):

Method                         Top-1  Top-5
(Chen et al, 2020b)            71.1   -
InfoMin Aug.                   73.0   91.1
BYOL                           74.3   91.6
RELIC (Mitrovic et al, 2021)   74.8   92.2
SwAV (Caron et al, 2020)       75.3   -
NNCLR (Dwibedi et al, 2021)    75.6   92.4
C-BYOL (Lee et al, 2021)       75…

Section: Methods
confidence: 99%
“…Next we evaluate the performance of RELICv2 in a semi-supervised setting. We pretrain the representation and leverage a small subset of the available labels in the ImageNet training set to refine the learned representation, following the protocol described in (Chen et al, 2020a; Caron et al, 2020; Dwibedi et al, 2021; Lee et al, 2021) and the supplementary material. Top-1 and top-5 accuracy on the ImageNet validation set are reported in table 2.…”
Section: Semi-Supervised Training on ImageNet
confidence: 99%