2022
DOI: 10.1109/tkde.2022.3198746
Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity

Abstract: While self-supervised learning techniques are often used to mine hidden knowledge from unlabeled data via modeling multiple views, it is unclear how to perform effective representation learning in a complex and inconsistent context. To this end, we propose a new multi-view self-supervised learning method, namely consistency and complementarity network (CoCoNet), to comprehensively learn global inter-view consistent and local cross-view complementarity-preserving representations from multiple views. To capture …


Cited by 7 publications (2 citation statements) · References 52 publications
“…As one of the most effective self-supervised methods, contrastive learning (CL) aims to learn discriminative representations from unlabelled data (Li et al 2022a). With the principle of pulling positive pairs closer and pushing negative pairs away, CL methods, such as SimCLR (Chen et al 2020), MoCo (He et al 2020), BYOL (Grill et al 2020), MetAug (Li et al 2022b), and Barlow Twins (Zbontar et al 2021), have achieved outstanding success in the field of computer vision (Wu et al 2018; Chen et al 2020; Qiang et al 2022).…”
Section: Introduction (mentioning)
confidence: 99%
“…A fundamental idea behind self-supervised learning is to learn discriminative representations from the input data without relying on human annotations. Recent advances in visual self-supervised learning [3,4,5,6,7] demonstrate that unsupervised approaches can achieve competitive performance over supervised approaches by introducing sophisticated self-supervised tasks. A representative learning paradigm is contrastive learning [8,1,9,10,11,12,13], which aims to learn invariant information from different views (generated by data augmentations) by performing instance-level contrast, i.e., pulling views of the same sample together while pushing views of different samples away.…”
(mentioning)
confidence: 99%
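The citation statement above describes instance-level contrast: pulling two augmented views of the same sample together while pushing views of different samples apart. A minimal sketch of that principle as an InfoNCE-style loss, in NumPy, is shown below. The function name and the simplified single-direction formulation are illustrative assumptions, not the actual objective of CoCoNet or of any method cited above.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Simplified instance-level contrastive (InfoNCE-style) loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N samples. Row i of z1 and row i of z2 form a positive pair;
    every other row of z2 acts as a negative for row i of z1.
    """
    # L2-normalise so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Cross-entropy over each row, with the diagonal (the matching
    # pair) as the target class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The loss is small when matching rows of `z1` and `z2` are more similar to each other than to the other rows; a temperature below 1 sharpens the softmax so hard negatives contribute more.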