2023
DOI: 10.1109/access.2023.3236087
DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning

Abstract: Self-supervised learning (SSL) has gained remarkable success, for which contrastive learning (CL) plays a key role. However, the recent development of new non-CL frameworks has achieved comparable or better performance, prompting researchers to enhance these frameworks further. Assimilating CL into non-CL frameworks has been thought to be beneficial, but empirical evidence indicates no visible improvements. In view of that, this paper proposes a strategy of performing CL along the dimensional direction instead…
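
The abstract is truncated, but the core idea it names, performing contrastive learning along the dimensional direction rather than across instances, can be sketched. The snippet below is a hypothetical PyTorch rendering based only on the abstract's description: each feature dimension, taken as a vector over the batch, is contrasted with the corresponding dimension of the other augmented view. All names and the temperature value are illustrative, and the actual DimCL objective may differ.

```python
import torch
import torch.nn.functional as F

def dimensional_contrastive_loss(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings of two augmented views.
    # Transpose so each row is one feature dimension across the batch,
    # then contrast dimensions instead of instances (an assumption drawn
    # from the abstract, not the paper's verified implementation).
    d1 = F.normalize(z1.t(), dim=1)  # (dim, batch), unit-norm rows
    d2 = F.normalize(z2.t(), dim=1)

    logits = d1 @ d2.t() / temperature                     # (dim, dim) similarities
    targets = torch.arange(d1.size(0), device=z1.device)   # dimension i matches dimension i
    return F.cross_entropy(logits, targets)
```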

Cited by 5 publications (2 citation statements)
References 56 publications
“…23 Contrastive learning is an important method in self-supervised learning that learns feature representations by comparing the similarity between samples. 24 Specifically, contrastive learning requires pairing different views or transformations of the same sample as positive pairs and views or transformations from different samples as negative pairs. By maximizing the similarity of positive pairs and minimizing the similarity of negative pairs, the model can learn feature representations with good discriminative ability.…”
Section: Self-supervised Learning (citation type: mentioning)
confidence: 99%
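
The mechanism this citing paper describes, positive pairs from two views of the same sample, negatives from different samples, maximizing positive similarity while minimizing negative similarity, corresponds to the standard instance-level InfoNCE/NT-Xent objective. Below is a minimal sketch, assuming PyTorch and a simplified one-directional form; the function name and temperature are illustrative, not the cited papers' exact implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    # z1, z2: (batch, dim) embeddings of two augmented views, where
    # row i of z1 and row i of z2 come from the same sample (positive
    # pair) and every other row pairing acts as a negative.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Cross-entropy pulls the diagonal (positive pairs) up and pushes
    # the off-diagonal (negative pairs) down.
    return F.cross_entropy(logits, targets)
```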
“…Contrastive learning is an important method in self-supervised learning that learns feature representations by comparing the similarity between samples. 24 Specifically, contrastive learning requires pairing different views or transformations of the same sample as positive pairs and views or transformations from different samples as negative pairs.…”
Section: Related Work (citation type: mentioning)
confidence: 99%