2021
DOI: 10.1109/tmm.2020.3025666
Deep Multi-View Subspace Clustering With Unified and Discriminative Learning

Cited by 114 publications (33 citation statements)
References 38 publications
“…Regarding the optimisation of U_C^(v), U_I^(v), V_C and V_I^(v), their Lagrangian functions are constructed, respectively. By applying the Karush-Kuhn-Tucker (KKT) conditions to each Lagrangian function, the following update rules for each matrix variable can be obtained. Regarding the non-incremental NMF-based algorithms [7, 8, 11, 14, 17, 18, 28] mentioned above, they require the whole dataset when executing. Therefore, they spend considerable time learning the common feature of each new incoming multi-view instance.…”
Section: Non-incremental Clustering Methods (mentioning)
Confidence: 99%
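The KKT-derived update rules referenced in this excerpt are the usual multiplicative updates for NMF-style objectives. Below is a minimal NumPy sketch of such updates for a simplified multi-view NMF in which a single coefficient matrix V is shared across views; the function name, the objective ||X^(v) - U^(v) V||_F^2 summed over views, and the omission of the per-view individual factors U_I^(v), V_I^(v) are illustrative simplifications, not the formulation of the cited papers.

```python
# Minimal NumPy sketch of KKT-derived multiplicative updates for a
# simplified multi-view NMF with a shared coefficient matrix V.
# Illustrative only; the cited methods also keep per-view individual factors.
import numpy as np

def multiview_nmf(Xs, k, n_iter=200, eps=1e-10, seed=0):
    """Factorise each view X^(v) (d_v x n) as U^(v) V with a common V (k x n)."""
    rng = np.random.default_rng(seed)
    n = Xs[0].shape[1]
    Us = [rng.random((X.shape[0], k)) for X in Xs]
    V = rng.random((k, n))
    for _ in range(n_iter):
        # Per-view basis update, obtained from the KKT conditions of the
        # Lagrangian of ||X^(v) - U^(v) V||_F^2 with non-negativity constraints.
        for v, X in enumerate(Xs):
            Us[v] *= (X @ V.T) / (Us[v] @ V @ V.T + eps)
        # Shared-coefficient update uses all views jointly.
        num = sum(U.T @ X for U, X in zip(Us, Xs))
        den = sum(U.T @ U for U in Us) @ V + eps
        V *= num / den
    return Us, V

# Toy usage: two views of 100 samples, clustered by the dominant row of V.
if __name__ == "__main__":
    Xs = [np.random.rand(20, 100), np.random.rand(30, 100)]
    Us, V = multiview_nmf(Xs, k=3)
    labels = V.argmax(axis=0)
```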
“…Regarding the non-incremental NMF-based algorithms [7, 8, 11, 14, 17, 18, 28] mentioned above, they require the whole dataset when executing. Therefore, they spend considerable time learning the common feature of each new incoming multi-view instance.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Many clustering methods based on deep neural networks, often referred to as deep clustering methods, have been developed. These existing deep clustering methods can mainly be divided into two categories, namely the single-stage methods [2]-[15] and the two-stage methods [16], [17]. Specifically, the single-stage deep clustering methods generally seek to jointly learn feature representations and cluster assignments in an end-to-end framework.…”
Section: Introduction (mentioning)
Confidence: 99%
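As a concrete illustration of the single-stage idea described in this excerpt, the following sketch jointly optimises an encoder and soft cluster assignments in a DEC-style fashion (Student-t assignments sharpened into a self-training target). The architecture, hyperparameters and the absence of autoencoder pretraining are illustrative assumptions, not details of the cited methods.

```python
# DEC-style sketch of single-stage deep clustering: representations and
# cluster assignments are learned jointly in one end-to-end objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepClusterer(nn.Module):
    def __init__(self, in_dim, latent_dim=10, n_clusters=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, x):
        z = self.encoder(x)
        # Student-t soft assignment of each embedding to each centroid.
        d2 = torch.cdist(z, self.centroids).pow(2)
        q = (1.0 + d2).reciprocal()
        return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # Sharpen the soft assignments to form the self-training target P.
    w = q.pow(2) / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)

x = torch.randn(256, 50)                     # toy data
model = DeepClusterer(in_dim=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    q = model(x)
    loss = F.kl_div(q.log(), target_distribution(q).detach(),
                    reduction="batchmean")   # KL(P || Q) self-training loss
    opt.zero_grad(); loss.backward(); opt.step()
```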
“…Ji et al. [10] developed the Invariant Information Clustering (IIC) method, which learns a clustering function by maximizing the mutual information between the cluster assignments of data pairs. Besides these single-stage methods [2]-[15], some efforts in designing two-stage deep clustering methods have also been made [16], [17]. Van Gansbeke et al. [16] proposed the Semantic Clustering by Adopting Nearest neighbors (SCAN) method, which conducts a pretext task of contrastive learning to mine the nearest neighbors in the first stage, and performs further learning and clustering optimization based on the nearest neighbors in the next stage.…”
Section: Introduction (mentioning)
Confidence: 99%
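The IIC objective mentioned in this excerpt maximises the mutual information between the cluster assignments of paired samples (e.g. an image and its augmentation). A possible PyTorch sketch of that loss is given below; the function name and the epsilon clamp are illustrative assumptions rather than the reference implementation.

```python
# Sketch of the IIC loss: maximise the mutual information between the
# cluster assignments of two views of the same sample.
import torch

def iic_loss(p, p_aug, eps=1e-8):
    """p, p_aug: (N, C) softmax cluster assignments for paired samples."""
    # Empirical joint distribution over cluster pairs, symmetrised.
    joint = (p.unsqueeze(2) * p_aug.unsqueeze(1)).mean(dim=0)
    joint = ((joint + joint.t()) / 2).clamp(min=eps)
    marg_r = joint.sum(dim=1, keepdim=True)   # marginal of the first view
    marg_c = joint.sum(dim=0, keepdim=True)   # marginal of the second view
    # Negative mutual information: -sum_ij P_ij log(P_ij / (P_i P_j)).
    return -(joint * (joint.log() - marg_r.log() - marg_c.log())).sum()
```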
“…The key to MVC is to find the consistent and complementary information among the views, each of which describes the data from a different aspect; this problem has attracted enormous attention. Existing multi-view clustering approaches can be categorized into four categories according to the mechanisms and principles involved, namely co-training, multi-kernel clustering, graph clustering and subspace clustering [5]-[9]. Co-training algorithms bootstrap the clustering results of different views by using prior or learned knowledge from the other views [10]-[12].…”
Section: Introduction (mentioning)
Confidence: 99%
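Since the cited paper falls into the subspace-clustering category, a minimal single-view sketch of the underlying self-expressive model may help: each sample is reconstructed from the other samples, X ≈ XC, and the resulting coefficients define an affinity for spectral clustering. The ridge-regularised closed form and the zeroed diagonal below are simplifying assumptions, not the unified multi-view model of the paper.

```python
# Minimal sketch of self-expressive subspace clustering (single view).
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive_coeffs(X, lam=0.1):
    """X: (d, n). Ridge-regularised self-expression: min ||X - XC||^2 + lam ||C||^2."""
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)   # heuristic: a sample should not represent itself
    return C

X = np.random.rand(20, 100)                # toy data: 100 samples, 20 features
C = self_expressive_coeffs(X)
A = np.abs(C) + np.abs(C).T                # symmetric non-negative affinity
labels = SpectralClustering(n_clusters=3, affinity="precomputed").fit_predict(A)
```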