2017 IEEE International Conference on Big Data (Big Data) 2017
DOI: 10.1109/bigdata.2017.8257989

Error-robust multi-view clustering

Abstract: In the era of big data, data may come from multiple sources, known as multi-view data. Multi-view clustering aims at generating better clusters by exploiting complementary and consistent information from multiple views rather than relying on any individual view. Due to inevitable system errors caused by data-capturing sensors or other factors, the data in each view may be erroneous. Various types of errors behave differently and inconsistently in each view. More precisely, errors could exhibit as noise and corruptions …

Cited by 13 publications (7 citation statements) | References 15 publications
“…Evaluating the robustness of the proposed RGCN and MVRGCN models against outlier-contaminated datasets, and conducting experiments on various architectures for robust deep autoencoders when using two separate deep autoencoders to capture clean low-rank and sparse components, are considered as other directions for future work. It is worth mentioning that the results for the Fox dataset do not outperform state-of-the-art methods based on Markov chains, e.g., [17]. My intuition is that those results are achievable by training the models for more epochs and trying other architectures for robust deep autoencoders.…”
Section: Discussion (mentioning)
confidence: 90%
“…Improving the robustness of machine learning algorithms has received considerable attention recently [14, 17–19]. Existing methods for improving the robustness of deep learning models can be roughly classified into four categories.…”
Section: Related Work (mentioning)
confidence: 99%
“…Consequently, Multi-View Clustering (MVC) approaches have recently arisen to overcome the disadvantages of single-view clustering [16], [17]. These approaches include multi-view clustering based on LRR (MVC-LRR) [17]–[19], multi-view clustering based on robust principal component analysis (MVC-RPCA) [20]–[22], and multi-view clustering based on graphs (MVC-G) [23]–[25].…”
Section: Introduction (mentioning)
confidence: 99%
“…However, ETLMSC follows a two-step strategy to construct the affinity matrix from the recovered clean tensor, which is then used to detect the network's structure. In [22], the authors studied the different possible errors in multi-view features and proposed an error-robust multi-view spectral clustering model. In [29], the authors proposed a multi-view subspace clustering model based on transition probability matrix learning and a nonconvex low-rank tensor approximation instead of the convex nuclear norm.…”
Section: Introduction (mentioning)
confidence: 99%
“…To overcome this limitation, Xia et al. [15] applied LRR to multi-view clustering to learn a low-rank transition probability matrix as the input to the standard Markov chain clustering method. Taking the different types of noise in samples into account, Najafi et al. [16] combined low-rank approximation with error learning to eliminate noise and outliers. The work in [17] used low-rank and sparse constraints for multi-view clustering simultaneously.…”
Section: Introduction (mentioning)
confidence: 99%
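The low-rank-plus-sparse separation underlying these error-robust models can be sketched with standard robust PCA (principal component pursuit), solved by an inexact augmented Lagrangian iteration. This is a minimal illustrative sketch of the generic technique, not the exact model of [16] or [22]; the function name `rpca` and the default parameter choices are assumptions of this sketch, following the common PCP formulation.

```python
import numpy as np

def soft_threshold(M, tau):
    # Elementwise shrinkage toward zero (proximal operator of the l1 norm).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_threshold(M, tau):
    # Singular value thresholding (proximal operator of the nuclear norm).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def rpca(X, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Decompose X into low-rank L plus sparse error S:
    minimize ||L||_* + lam * ||S||_1  subject to  X = L + S."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    if mu is None:
        mu = 0.25 * m * n / (np.abs(X).sum() + 1e-12)
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    Y = np.zeros_like(X)                         # Lagrange multipliers
    for _ in range(n_iter):
        L = svd_threshold(X - S + Y / mu, 1.0 / mu)
        S = soft_threshold(X - L + Y / mu, lam / mu)
        R = X - L - S                            # constraint residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(X):
            break
    return L, S
```

In a multi-view setting, each view's feature matrix X_v would be decomposed separately into a clean low-rank part L_v and a sparse error part S_v, and the clean parts (or transition probabilities derived from them) would then feed the spectral or Markov chain clustering step.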