2022
DOI: 10.1016/j.sigpro.2021.108408

A linearly convergent algorithm for distributed principal component analysis


Cited by 19 publications (7 citation statements) · References 24 publications
“…Principal Component Analysis (PCA) is a statistical method that reduces the dimensionality of a data set through a linear transformation and provides a comprehensive analysis of the samples (44). Correlation analysis is a straightforward analytical technique for determining the relationship between two sets of quantitative data.…”
Section: Results
confidence: 99%
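As a concrete illustration of the dimensionality reduction described in the excerpt above, here is a minimal PCA sketch in Python/NumPy. The function name, the data matrix X, and the target dimension k are illustrative assumptions, not code from the cited work.

```python
import numpy as np

def pca(X, k):
    """Project n samples in d dimensions onto the top-k principal components."""
    Xc = X - X.mean(axis=0)               # center each feature
    C = Xc.T @ Xc / (X.shape[0] - 1)      # sample covariance matrix, (d, d)
    _, eigvecs = np.linalg.eigh(C)        # eigenvectors, ascending eigenvalues
    top = eigvecs[:, ::-1][:, :k]         # top-k eigenvectors as columns
    return Xc @ top                       # (n, k) low-dimensional representation

# Hypothetical usage: reduce 100 samples from 10 to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
print(pca(X, k=2).shape)                  # (100, 2)
```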
“…The work in this paper is an extension of the preliminary work in [1], which proposed two fast and efficient algorithms for distributed PCA in the case of sample-wise distributed data. A distributed PCA algorithm based on the generalized Hebbian algorithm was developed and analyzed in our previous work [30]. Although it is a linearly convergent one-time-scale algorithm that converges from any random initial point, it only reaches a neighborhood of the optimal solution.…”
Section: Relation to Prior Work
confidence: 99%
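For context, the generalized Hebbian algorithm (Sanger's rule) mentioned in the excerpt can be sketched as follows in its basic centralized form. The step size alpha, the epoch count, and the variable names are assumptions for illustration; this is not the distributed algorithm of [30] itself.

```python
import numpy as np

def gha(X, k, alpha=0.01, epochs=50, seed=0):
    """Centralized generalized Hebbian algorithm (Sanger's rule).

    The rows of W converge to the top-k eigenvectors of the sample
    covariance of X (assumed centered).
    """
    _, d = X.shape
    W = np.random.default_rng(seed).normal(size=(k, d))
    for _ in range(epochs):
        for x in X:
            y = W @ x                     # (k,) component outputs
            # Hebbian term minus lower-triangular deflation term
            W += alpha * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```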
“…Estimation of the eigenvectors of C at every node without sharing the raw data C_i would require some form of collaboration among the nodes of the network. To this end, we proposed a combine-and-adapt strategy in our earlier work [30], a strategy widely used in the distributed optimization literature for convex and strongly convex problems. Using that approach and an extensive analysis, we showed that even for the non-convex PCA problem, each node converges linearly and globally, i.e., starting from any random initial point, but only to a neighborhood of the true eigenvectors of the global covariance matrix C. Even though we used the generalized Hebbian algorithm, some straightforward calculations and manipulations can establish similar results for Krasulina's method.…”
Section: Proposed Algorithm: FAST-PCA
confidence: 99%
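Below is a minimal sketch of the combine-and-adapt pattern the excerpt describes, assuming a doubly stochastic mixing matrix A over the network and per-node covariance matrices C_i. The GHA-style adapt step and all names are illustrative, not the exact iteration of [30].

```python
import numpy as np

def combine_and_adapt(C_local, A, k, alpha=0.05, iters=200, seed=0):
    """Combine-and-adapt iteration for distributed PCA (illustrative).

    C_local : list of N local (d, d) covariance matrices C_i.
    A       : (N, N) doubly stochastic mixing matrix of the network.
    Each node first averages its neighbors' estimates (combine), then
    takes a local GHA-style step on its own covariance (adapt).
    """
    N, d = len(C_local), C_local[0].shape[0]
    rng = np.random.default_rng(seed)
    W = [rng.normal(size=(d, k)) for _ in range(N)]  # columns: eigenvector estimates
    for _ in range(iters):
        # Combine: consensus averaging with the mixing weights A[i, j]
        V = [sum(A[i, j] * W[j] for j in range(N)) for i in range(N)]
        # Adapt: local Hebbian step with upper-triangular deflation
        for i in range(N):
            Y = C_local[i] @ V[i]
            W[i] = V[i] + alpha * (Y - V[i] @ np.triu(V[i].T @ Y))
    return W
```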
“…In local descriptors, the method is applied to small image patches to build the feature representation. Extracting features from different image regions makes local descriptors more effective than global descriptors [4][5][6]. In global descriptors, the method is applied to the entire image.…”
Section: Introduction
confidence: 99%
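To make the local-versus-global distinction concrete, here is a toy sketch that uses intensity histograms as the feature; the patch size and the histogram descriptor are illustrative assumptions, not the descriptors used in the cited works.

```python
import numpy as np

def global_descriptor(img, bins=16):
    """Global descriptor: one feature vector from the entire image."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def local_descriptors(img, patch=8, bins=16):
    """Local descriptors: one feature vector per small image patch."""
    h, w = img.shape
    feats = [global_descriptor(img[r:r + patch, c:c + patch], bins)
             for r in range(0, h - patch + 1, patch)
             for c in range(0, w - patch + 1, patch)]
    return np.stack(feats)                # (num_patches, bins)

# Hypothetical usage on a random 64x64 grayscale image in [0, 1].
img = np.random.default_rng(0).random((64, 64))
print(global_descriptor(img).shape)       # (16,)
print(local_descriptors(img).shape)       # (64, 16)
```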