2023
DOI: 10.1109/tpami.2022.3216454

Learning Representation for Clustering Via Prototype Scattering and Positive Sampling

Abstract: Existing deep clustering methods rely on either contrastive or non-contrastive representation learning for the downstream clustering task. Contrastive methods, thanks to negative pairs, learn uniform representations for clustering; the negative pairs, however, may inevitably lead to the class collision issue and consequently compromise clustering performance. Non-contrastive methods, on the other hand, avoid the class collision issue, but the resulting non-uniform representations may cause the colla…
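For context on the two families of objectives contrasted in the abstract, here is a minimal PyTorch sketch (illustrative function names, not the paper's actual losses): an InfoNCE-style contrastive objective that uses the other in-batch samples as negatives, versus a non-contrastive objective that aligns only positive pairs.

```python
import torch
import torch.nn.functional as F

def contrastive_infonce(z1, z2, temperature=0.5):
    """InfoNCE over two augmented views; other in-batch samples act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)          # diagonal pairs are the positives

def non_contrastive_alignment(z1, z2):
    """Positive-pair alignment only, no negatives (BYOL/SimSiam-style)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    return (2 - 2 * (z1 * z2).sum(dim=1)).mean()     # squared distance of unit vectors
```

In practice `z1` and `z2` would be encoder outputs of two augmentations of the same batch; the class collision issue mentioned in the abstract arises because the in-batch negatives in the first objective can in fact belong to the same cluster.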


Cited by 52 publications (29 citation statements) · References 35 publications
“…Based on the assumption that neighboring samples around one view are truly positive with respect to its augmented view [12], we propose diffused sampling to spread the current features to the domain to generate diffused features. The key goal of DSA is to generate the diffused features.…”
Section: A. Diffused Sampling Alignment (mentioning)
confidence: 99%
“…The key goal of DSA is to generate the diffused features. Specifically, following [12], we achieve the generation of diffused features by introducing a Gaussian distribution, which can be expressed as follows,…”
Section: A. Diffused Sampling Alignment (mentioning)
confidence: 99%
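The Gaussian generation of diffused features mentioned in this statement can be sketched as below; this is a rough illustration of the positive-sampling idea in [12], with the noise scale `sigma` and the alignment loss chosen as assumptions rather than taken from the citing paper.

```python
import torch
import torch.nn.functional as F

def diffused_positive(z, sigma=0.001):
    """Perturb a feature with isotropic Gaussian noise to obtain a 'diffused' positive."""
    return z + sigma * torch.randn_like(z)

def diffused_alignment_loss(online_feat, target_feat, sigma=0.001):
    """Align the Gaussian-perturbed online feature with the other view's feature."""
    z_pos = F.normalize(diffused_positive(online_feat, sigma), dim=1)
    z_tgt = F.normalize(target_feat, dim=1).detach()   # stop-gradient on the target view
    return (2 - 2 * (z_pos * z_tgt).sum(dim=1)).mean()
```

Perturbing a feature with small isotropic noise treats its immediate neighborhood as positive, which matches the assumption quoted in the statement above.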
“…It minimizes the distance between feature representations to learn richer information from the teacher model compared to softened labels [49]. Further, contrastive representation distillation is proposed to exploit the structural characteristics among different samples by harnessing the discriminative representations, thereby enhancing differentiation within data [8,19,45]. This work distills knowledge from the intermediate-view images to enhance the sparse-view CT reconstruction.…”
Section: Knowledge Distillation (mentioning)
confidence: 99%
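As a rough illustration of the feature-level distillation described above (matching intermediate representations rather than softened labels), a sketch follows; the linear projection and MSE objective are assumptions for illustration, not the cited works' exact formulations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillationLoss(nn.Module):
    """Match student features to (frozen) teacher features instead of softened labels."""
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        # project student features into the teacher's feature space
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_feat, teacher_feat):
        s = self.proj(student_feat)
        t = teacher_feat.detach()            # teacher provides a fixed target
        return F.mse_loss(s, t)              # distance between representations
```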