2017
DOI: 10.1109/tip.2016.2627806

Flexible Multi-View Dimensionality Co-Reduction

Abstract: Dimensionality reduction aims to map high-dimensional inputs onto a low-dimensional subspace, in which similar points are close to each other and vice versa. In this paper, we focus on unsupervised dimensionality reduction for data with multiple views, and propose a novel method, called Multi-view Dimensionality co-Reduction. Our method flexibly exploits the complementarity of multiple views during the dimensionality reduction and respects the similarity relationships between data points across the…
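According to the citing-work excerpt later on this page, MDcR applies kernel matching to regularize the dependence between views; the Hilbert-Schmidt Independence Criterion (HSIC) is the standard kernel dependence measure for this kind of term. The sketch below only illustrates that one ingredient under my own assumptions (Gaussian kernels, bandwidth `sigma`, and all function names are mine), not the paper's actual objective or implementation:

```python
import numpy as np

def _center(K):
    # Double-center a Gram matrix: H K H with H = I - (1/n) 11^T.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def _gaussian_gram(Z, sigma=1.0):
    # Pairwise Gaussian kernel matrix; rows of Z are samples.
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC estimate between two views (rows = samples).
    # Larger values indicate stronger statistical dependence.
    n = X.shape[0]
    K = _center(_gaussian_gram(X, sigma))
    L = _center(_gaussian_gram(Y, sigma))
    return np.trace(K @ L) / (n - 1) ** 2

# Two views driven by a shared latent factor are strongly dependent;
# an unrelated noise view is not.
rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 2))                  # shared latent factor
view1 = Z @ rng.standard_normal((2, 3))
view2 = Z @ rng.standard_normal((2, 4))
print(hsic(view1, view2))                          # relatively large
print(hsic(view1, rng.standard_normal((100, 4))))  # near zero
```

In a co-reduction objective of this flavor, a term like `hsic(X1 @ W1, X2 @ W2)` would be maximized jointly with per-view locality-preserving terms; how the two terms are weighted is a design choice of the method.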

Cited by 126 publications (32 citation statements) · References 38 publications (44 reference statements)
“…The number of RMS features is much larger than the number of subjects, especially for dFC; more importantly, many features may be irrelevant to the classification task. Directly training a machine-learning model on high-dimensional, small-sample data tends to yield poor generalization performance because of overfitting [Chen et al.; Zhang et al.]. In addition, it also makes interpretation of the results quite difficult.…”
Section: Methods
confidence: 99%
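To make the excerpt's point concrete, here is a tiny synthetic demonstration (the sample sizes, feature count, and use of scikit-learn are my illustration, not taken from the cited study): with far more features than subjects and no real signal, a classifier can fit the training set perfectly yet perform at chance on held-out data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 40, 5000                       # few "subjects", many features (d >> n)
X = rng.standard_normal((n, d))       # pure noise: no class-related signal
y = rng.integers(0, 2, size=n)        # random binary labels

X_tr, X_te, y_tr, y_te = X[:30], X[30:], y[:30], y[30:]
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("train accuracy:", clf.score(X_tr, y_tr))  # typically 1.0 (memorized)
print("test accuracy: ", clf.score(X_te, y_te))  # hovers around chance (0.5)
```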
“…To explore nonlinear correlations, DCCA [13] and deep CCA [6] extend CCA with deep neural networks. Different from CCA, Multi-view Dimensionality co-Reduction (MDcR) [14] applies kernel matching to regularize the dependence across multiple views. Inspired by deep learning, semi-nonnegative matrix factorization is utilized to find a common representation that captures the consistent information of multiple views [15].…”
Section: Related Work
confidence: 99%
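For reference alongside this excerpt, classical linear CCA can be computed in closed form by whitening each view and taking an SVD. The textbook sketch below (variable names are mine; the ridge term `reg` is added for numerical stability) shows what the deep variants generalize with neural networks:

```python
import numpy as np

def linear_cca(X, Y, k, reg=1e-4):
    # Classical linear CCA. X: (n, dx), Y: (n, dy); rows are paired samples.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is SPD).
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(T)
    A = inv_sqrt(Cxx) @ U[:, :k]      # projection for view X
    B = inv_sqrt(Cyy) @ Vt[:k].T      # projection for view Y
    return A, B, s[:k]                # s[:k] are the canonical correlations
```

Here `X @ A` and `Y @ B` are maximally correlated k-dimensional embeddings; MDcR, by contrast, keeps separate per-view projections and couples them only through the dependence regularizer.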
“…Automatically approximating samples in a high-dimensional ambient space by a union of low-dimensional linear subspaces is considered a crucial task in computer vision [9], [23], [24], [25], [26]. In this section, we review the related contributions in the following three aspects: self-representation calculation, estimating the number of clusters, and hyper-graph clustering.…”
Section: Related Work
confidence: 99%
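As a concrete instance of the "self-representation calculation" step the excerpt lists, least-squares self-representation has a closed-form solution. This generic sketch is one common choice, not necessarily the formulation used in the surveyed papers (zeroing the diagonal afterwards is a standard heuristic rather than an exact constraint):

```python
import numpy as np

def lsr_self_representation(X, lam=0.1):
    # X: (d, n) data matrix with one sample per column.
    # Solve min_C ||X - X C||_F^2 + lam ||C||_F^2, whose closed form is
    # C = (X^T X + lam I)^{-1} X^T X, then suppress trivial self-loops.
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)
    return C

def affinity(C):
    # Symmetrized affinity matrix, the usual input to spectral clustering.
    return (np.abs(C) + np.abs(C.T)) / 2.0
```

Samples drawn from the same low-dimensional subspace tend to receive large mutual coefficients in C, so spectral clustering on `affinity(C)` recovers the subspaces.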