2014
DOI: 10.1109/tnnls.2013.2294492

Modified Principal Component Analysis: An Integration of Multiple Similarity Subspace Models

Abstract: We modify the conventional principal component analysis (PCA) and propose a novel subspace learning framework, modified PCA (MPCA), using multiple similarity measurements. MPCA computes three similarity matrices exploiting the similarity measurements: 1) mutual information; 2) angle information; and 3) Gaussian kernel similarity. We employ the eigenvectors of similarity matrices to produce new subspaces, referred to as similarity subspaces. A new integrated similarity subspace is then generated using a novel f…
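The abstract describes building similarity matrices (mutual information, angle information, Gaussian kernel) and taking their eigenvectors as similarity subspaces. The sketch below illustrates two of these constructions, assuming NumPy; the function names are hypothetical, mutual-information estimation is omitted as it requires more machinery, and this is not the authors' implementation.

```python
import numpy as np

def gaussian_similarity(X, sigma=1.0):
    # Pairwise Gaussian kernel similarity between samples (rows of X).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def angle_similarity(X):
    # Cosine-of-angle similarity between samples.
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    Xn = X / norms
    return Xn @ Xn.T

def similarity_subspace(S, k):
    # The top-k eigenvectors of a symmetric similarity matrix
    # span the corresponding similarity subspace.
    w, V = np.linalg.eigh(S)            # eigh returns ascending eigenvalues
    top = np.argsort(w)[::-1][:k]
    return V[:, top]

# Toy data: 50 samples, 10 features.
X = np.random.default_rng(0).normal(size=(50, 10))
U_gauss = similarity_subspace(gaussian_similarity(X), k=3)
U_angle = similarity_subspace(angle_similarity(X), k=3)
```

How the resulting similarity subspaces are integrated is described by the paper's fusion step, which the truncated abstract does not fully state, so it is not reproduced here.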


Cited by 59 publications (12 citation statements)
References: 61 publications
“…Experiments show that this algorithm can better extract useful features from the data set. By improving the PCA algorithm, Fan et al. proposed a learning framework based on multiple similarity-measurement subspaces, namely the modified principal component analysis (MPCA) algorithm [22]. MPCA computes three similarity matrices according to the similarity measurements: a mutual information matrix, an angle information matrix, and a Gaussian kernel similarity matrix.…”
Section: Discussion
confidence: 99%
“…Geometric [22]: sensitive to noise and outliers in the data; complexity is exponential in the number of subspace dimensions. Statistical, principal component analysis [23]: the number and dimensions of the subspaces must be known. Multistage learning [24], [25]: sensitive to initialization. Robust statistical approach [26]: the subspace dimensions must be known and equal. Agglomerative lossy compression [27]: complexity is exponential in the number of subspace dimensions, and there is no theoretical proof of the optimality of the agglomerative algorithm.…”
Section: Proposed Methodology
confidence: 99%
“…The mainlobe interference, with its high power, can be treated as the principal component of the echo. So the principal component analysis (PCA) algorithm can be exploited to extract the mainlobe interference [6, 7]. The complete signal processing structure of the proposed method is shown in Fig.…”
Section: Introduction
confidence: 99%
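The citation above treats high-power mainlobe interference as the dominant principal component of the radar echo. A minimal sketch of that idea, on synthetic multichannel data (illustrative only, not the cited method; all variable names and signal parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 500

# Synthetic echo: one strong rank-1 interference component plus weak noise.
steer = rng.normal(size=(n_channels, 1))                      # interference direction
interference = steer @ rng.normal(scale=10.0, size=(1, n_samples))
echo = interference + rng.normal(scale=0.5, size=(n_channels, n_samples))

# PCA via eigendecomposition of the sample covariance across channels.
R = echo @ echo.T / n_samples
w, V = np.linalg.eigh(R)
u1 = V[:, -1:]                     # dominant eigenvector: interference subspace

# Project onto the principal component to extract the interference,
# then subtract the projection to suppress it.
extracted = u1 @ (u1.T @ echo)
residual = echo - extracted
```

Because the interference power dominates, the leading eigenvector aligns with the interference direction, and the residual retains mostly the low-power component; real radar processing would of course work on complex baseband data and the specific structure of the cited method.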