Proceedings of the 2017 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611974973.60

A Deflation Method for Structured Probabilistic PCA

Abstract: Modern treatments of structured Principal Component Analysis often focus on the estimation of a single component under various assumptions or priors, such as sparsity and smoothness, and then the procedure is extended to multiple components by sequential estimation interleaved with deflation. While prior work has highlighted the importance of proper deflation for ensuring the quality of the estimated components, to our knowledge, proposed techniques have only been developed and applied to non-probabilistic pri…

Cited by 6 publications (6 citation statements)
References 10 publications
“…To avoid the requirement of supervision, example-based explanation methods select representative exemplars that summarize the data distribution (Kim, Khanna, and Koyejo 2016; Khanna et al. 2019; Cho et al. 2021). Despite the advantages of their unsupervised algorithms, there is no guarantee that the exemplars precisely reflect the decision logic of the model.…”
Section: Related Work (mentioning)
confidence: 99%
“…When analyzing the model parameters, we employ principal component analysis (PCA) [10,12], a commonly used data-mapping method in machine learning. PCA finds the eigenvectors corresponding to the largest eigenvalues of the covariance matrix of the dataset, thereby identifying the directions of largest data variance and reducing an n-dimensional vector to d dimensions, where d < n. Fig.…”
Section: Principal Component Analysis (mentioning)
confidence: 99%
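The PCA procedure described in this excerpt — eigendecomposition of the sample covariance matrix, keeping the top-d eigenvectors — can be sketched as follows. This is a minimal illustration with NumPy, not code from the cited paper; the function name and data are hypothetical.

```python
import numpy as np

def pca(X, d):
    """Project n-dimensional rows of X onto the top-d principal directions.

    Minimal sketch of the steps in the excerpt: the eigenvectors of the
    sample covariance matrix with the largest eigenvalues give the
    directions of largest data variance.
    """
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix (n_features x n_features)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :d]            # top-d eigenvectors, largest eigenvalue first
    return Xc @ top                          # reduced representation, shape (n_samples, d)

# Illustrative usage on synthetic data (d = 2 < n = 5).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)
```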
“…where x, w are vectors. The general matrix case follows using standard deflation techniques for multiple factors [8,21,9]. The generative model for the observed data matrix is T = xwᵀ + ε, where ε ∼ N(0, σ²).…”
Section: Group Sparse Probabilistic Principal Components Analysis (mentioning)
confidence: 99%
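The rank-one generative model and the deflation step it implies can be sketched numerically. In this hedged illustration, a plain SVD stands in for the structured probabilistic estimator discussed in the paper, and all variable names are assumptions; the deflation shown is the standard Hotelling-style subtraction of the fitted rank-one component.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 20, 0.1

# Rank-one generative model: T = x w^T + E, with noise E_ij ~ N(0, sigma^2).
x = rng.normal(size=(n, 1))
w = rng.normal(size=(p, 1))
T = x @ w.T + sigma * rng.normal(size=(n, p))

# Estimate the leading factor pair (here via SVD, standing in for the
# structured estimator) ...
U, s, Vt = np.linalg.svd(T, full_matrices=False)
x_hat = U[:, :1] * s[0]   # leading left factor, scaled
w_hat = Vt[:1, :].T       # leading right factor

# ... then deflate: subtract the fitted rank-one component so the next
# factor can be estimated from the residual.
T_deflated = T - x_hat @ w_hat.T
```

After this step the leading singular value of `T_deflated` drops to roughly the noise level, which is what allows sequential estimation of further components.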