2023
DOI: 10.1007/978-3-031-33374-3_18
Adaptive View-Aligned and Feature Augmentation Network for Partially View-Aligned Clustering

Cited by 2 publications (6 citation statements)
References 15 publications
“…That is to say, we have less information available to learn the alignment matrix, which leads to poorer learning performance (because we only use unpaired samples in the alignment phase). PVC [29], one of the prior works, shows a similar result with a high alignment ratio. Second, a lower alignment rate does not always lead to poorer clustering performance.…”
Section: G Alignment Performance
confidence: 55%
“…• Partially view-aligned clustering (PVC) [29]: This method uses a differentiable surrogate of the Hungarian algorithm to learn the correspondence of the unaligned data in a latent space learned by a neural network.
• Multi-view Contrastive Learning with Noise-robust Loss (MvCLN) [30]: This method proposes a novel noise-robust contrastive loss to establish the view correspondence using contrastive learning.
3) Multi-View Clustering Algorithm on Fully Unpaired Data:…”
Section: Compared Methods
confidence: 99%
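The excerpt above notes that PVC replaces the (non-differentiable) Hungarian matching with a differentiable surrogate in a learned latent space. The snippet below is a minimal sketch of one common surrogate of this kind, an entropy-regularized (Sinkhorn) soft assignment; the function name, the squared-Euclidean cost, and the temperature `tau` are assumptions for illustration only and are not PVC's actual implementation.

```python
import torch

def sinkhorn_alignment(za, zb, n_iters=50, tau=0.1):
    """Soft alignment between two views' latent codes.

    za, zb: (n, d) latent features of view A and view B.
    Returns an (approximately) doubly-stochastic matrix P where
    P[i, j] is the soft probability that sample i in view A
    corresponds to sample j in view B. Hypothetical sketch of a
    Sinkhorn-style differentiable surrogate of Hungarian matching.
    """
    # Pairwise cost: squared Euclidean distance in the latent space.
    cost = torch.cdist(za, zb, p=2) ** 2            # (n, n)
    log_p = -cost / tau                              # similarity logits
    # Sinkhorn normalization: alternate row / column normalization
    # in log space until the matrix is roughly doubly stochastic.
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # columns
    return log_p.exp()

# Usage: encoders map the raw views to a shared latent space, and the
# soft permutation re-aligns view B to view A.
za, zb = torch.randn(8, 16), torch.randn(8, 16)
P = sinkhorn_alignment(za, zb)
zb_aligned = P @ zb
```

As `tau` shrinks, the soft assignment approaches a hard permutation, which is what makes this kind of relaxation a differentiable stand-in for Hungarian matching while still letting gradients flow into the encoders.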
“…They usually consist of 3 steps: 1) measuring the consistency of views; 2) identifying outliers as instances with significant inconsistencies across views; 3) removing outliers to construct a clean dataset. Recently, some multi-view learning methods (Huang et al. 2020; Zhang et al. 2023d) are dedicated to learning alignment relations for the original data and constructing new data instances accordingly. For example, the text view in Fig.…”
Section: Introduction
confidence: 99%
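The cited introduction describes a generic three-step outlier-removal route to handling inconsistent views. Below is a minimal illustrative sketch of those three steps under stated assumptions: paired view embeddings, a cosine-similarity consistency score, and a quantile threshold. The function name and threshold are hypothetical and not taken from the cited works.

```python
import numpy as np

def remove_cross_view_outliers(view_a, view_b, quantile=0.05):
    """Illustrative three-step pipeline from the excerpt above:
    1) measure the consistency of the two views,
    2) flag the least consistent instances as outliers,
    3) drop them to obtain a clean dataset.
    Cosine similarity and the quantile cutoff are assumptions.
    """
    # 1) Consistency: cosine similarity between paired rows.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    consistency = np.sum(a * b, axis=1)
    # 2) Outliers: instances in the lowest `quantile` of consistency.
    threshold = np.quantile(consistency, quantile)
    keep = consistency > threshold
    # 3) Clean dataset: keep only consistent instances in both views.
    return view_a[keep], view_b[keep]

clean_a, clean_b = remove_cross_view_outliers(
    np.random.randn(100, 8), np.random.randn(100, 8))
```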