2022
DOI: 10.1007/s11263-022-01639-z

Twin Contrastive Learning for Online Clustering

Abstract: This paper proposes to perform online clustering by conducting twin contrastive learning (TCL) at the instance and cluster level. Specifically, we find that when the data is projected into a feature space with a dimensionality of the target cluster number, the rows and columns of its feature matrix correspond to the instance and cluster representation, respectively. Based on the observation, for a given dataset, the proposed TCL first constructs positive and negative pairs through data augmentations. Thereafte…
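To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of twin contrastive losses over a (N × K) feature matrix, where rows serve as instance representations and (softmaxed) columns as cluster assignments. The function name, temperature, and the entropy regularizer are illustrative assumptions, not the authors' released implementation; the paper's exact heads and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

def twin_contrastive_losses(z_a, z_b, temperature=0.5):
    """Illustrative twin contrastive objective.

    z_a, z_b: (N, K) feature matrices for two augmented views of the same
    batch, where K equals the target cluster number. Rows act as instance
    representations; columns (after softmax) act as soft cluster assignments.
    """
    n, k = z_a.shape

    # Instance-level loss: contrast rows (NT-Xent over the 2N augmented views).
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)        # (2N, K)
    sim = z @ z.t() / temperature                               # (2N, 2N)
    sim.fill_diagonal_(float('-inf'))                           # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n, device=z.device),
                         torch.arange(0, n, device=z.device)])  # positive index
    inst_loss = F.cross_entropy(sim, targets)

    # Cluster-level loss: contrast columns, i.e. soft cluster assignments.
    p_a = F.softmax(z_a, dim=1).t()                             # (K, N)
    p_b = F.softmax(z_b, dim=1).t()
    c = F.normalize(torch.cat([p_a, p_b], dim=0), dim=1)        # (2K, N)
    sim_c = c @ c.t() / temperature
    sim_c.fill_diagonal_(float('-inf'))
    targets_c = torch.cat([torch.arange(k, 2 * k, device=z.device),
                           torch.arange(0, k, device=z.device)])
    clus_loss = F.cross_entropy(sim_c, targets_c)

    # Entropy of the mean assignment; subtracting it discourages collapsing
    # all samples into a single cluster (an assumed anti-collapse term).
    p_mean = F.softmax(torch.cat([z_a, z_b], dim=0), dim=1).mean(dim=0)
    entropy = -(p_mean * torch.log(p_mean.clamp_min(1e-8))).sum()

    return inst_loss + clus_loss - entropy
```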

Cited by 77 publications (19 citation statements)
References 50 publications
“…However, our ProPos has introduced further substantial improvements for deep clustering by addressing existing issues. Although CC [8] and TCL [38] achieve better NMIs on STL-10, we highlight that they use a large image size of 224 × 224 for all datasets, which is not a fair comparison with ours.…”
Section: Methods, AMI (contrasting)
confidence: 58%
“…Deep clustering [30], [31], [32], [33], [34], [35], [36], [37], [38] has been significantly advanced by self-supervised representation learning. Most deep clustering methods are based on contrastive learning, exploiting the discriminative representations learned from contrastive learning to assist downstream clustering tasks [6], [11], [38] or to simultaneously optimize representation learning and clustering [7], [9], [10], [39], [40]. For example, SCAN [6] yields confident pseudo-labels from a pre-trained SimCLR model, and IDFD [9] proposes to perform both instance discrimination and feature decorrelation.…”
Section: Deep Clustering (mentioning)
confidence: 99%
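The statement above mentions SCAN's strategy of mining confident pseudo-labels from a pre-trained self-supervised model. A minimal sketch of that idea follows; the threshold value and function name are illustrative assumptions, not SCAN's exact implementation.

```python
import torch

def confident_pseudo_labels(probs, threshold=0.99):
    """Keep only samples with a confident soft cluster assignment.

    probs: (N, K) softmax outputs of a clustering head built on a
    pre-trained (e.g. SimCLR-style) encoder. Returns the indices of
    confident samples and their pseudo-labels, which can then drive
    a self-labeling fine-tuning step.
    """
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold
    return keep.nonzero(as_tuple=True)[0], labels[keep]
```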
“…Several strategies, largely based on clustering techniques, have been developed to avoid substantial task-specific feature engineering in the face of dramatically increased dataset sizes. There are many research works and examples of purely advanced clustering methods [30, 31, 32, 33] for clustering a dataset. Yunfan Li et al. [30] propose a one-stage online clustering method that directly generates positive and negative instance pairs through data augmentation and then projects the pairs into a feature space.…”
Section: Literature Review (mentioning)
confidence: 99%
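The pair generation described in the quote above is the standard two-view construction: two independent random augmentations of one image form a positive pair, while views of different images in the batch serve as negatives. A minimal torchvision sketch follows; the specific transforms and the 224 × 224 crop size are illustrative assumptions.

```python
import torchvision.transforms as T

# Two independent random augmentations of the same image yield a
# positive pair; views of other images in the batch act as negatives.
augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

def two_views(pil_image):
    return augment(pil_image), augment(pil_image)
```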
“…Moreover, the research in [47] provided a semantic tagging strategy that uses Wikipedia’s knowledge to methodically identify content for social software engineering while also semantically grounding the tagging process. Despite the availability of advanced clustering methods [30, 31, 32, 33], we aimed to show the contribution of a clustering technique based on the basic k-means algorithm to further improve the effectiveness of hierarchically organized hidden implicit features of customers and products in building a recommendation model.…”
Section: Literature Review (mentioning)
confidence: 99%