2016
DOI: 10.1137/16m1058467
A Distributed and Incremental SVD Algorithm for Agglomerative Data Analysis on Large Networks

Abstract: In this paper it is shown that the SVD of a matrix can be constructed efficiently in a hierarchical approach. The proposed algorithm is proven to recover the singular values and left singular vectors of the input matrix A if its rank is known. Further, the hierarchical algorithm can be used to recover the d largest singular values and left singular vectors with bounded error. It is also shown that the proposed method is stable with respect to roundoff errors or corruption of the original matrix entries. Numeri…

Cited by 42 publications (44 citation statements). References 31 publications.
“…However, no bounds on the number of obtained (local and final) POD are given. [89] investigates distributed SVD computation on tree structures and features an error bound, but for a given target reduced rank. In comparison, the HAPOD prescribes a (mean projection) error bound by which the rank of the decomposition is determined.…”
Section: Related Work
confidence: 99%
“…Third, it could be rewarding to also consider versions of OMS for additional empirical GMRA variants including, e.g., those which rely on adaptive constructions [37], GMRA constructions in which subspaces that minimize different criteria are used to approximate the data in each partition element (see, e.g., [27]), and distributed GMRA constructions which are built up across networks using distributed clustering [4] and SVD [30] algorithms. Such variants could prove valuable with respect to reducing the overall computational storage and/or runtime requirements of OMS in different practical situations.…”
Section: Discussion
confidence: 99%
“…Therefore, the SVD cannot be computed without compromising the privacy of the data, as it requires all data to be stored in a single location. Nonetheless, the distributed calculation of the SVD can be performed if we make use of some results presented by Iwen et al. [60] They propose partitioning the original matrix $\mathbf{X}$, on which we want to apply the decomposition, into $k$ block matrices, that is, $\mathbf{X} = [\mathbf{X}_1 \; \mathbf{X}_2 \; \cdots \; \mathbf{X}_k]$, and subsequently computing in parallel the SVD of each of these blocks.…”
Section: Privacy-Preserving Training Algorithm for Autoencoders
confidence: 99%
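The block-partition idea described in the quoted passage can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual algorithm: the function names, the single merge stage, and the fixed truncation rank `d` are assumptions made here for clarity. Each column block is decomposed independently (in parallel in a real deployment), and the local factors are merged by a second small SVD; when `d` is at least the rank of the matrix, this recovers the singular values and left singular vectors exactly.

```python
import numpy as np

def truncated_svd(A, d):
    """Rank-d truncated SVD; returns the d leading left vectors and values."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :d], s[:d]

def hierarchical_svd(blocks, d):
    """Illustrative two-stage scheme: local SVDs per block, then one merge.

    Since X X^T = sum_i X_i X_i^T = M M^T with M = [U_1 S_1, ..., U_k S_k],
    the merged factor M has the same leading singular values and left
    singular vectors as X whenever d >= rank(X).
    """
    local = [truncated_svd(B, d) for B in blocks]          # parallelizable
    merged = np.hstack([U * s for U, s in local])          # U_i @ diag(s_i)
    return truncated_svd(merged, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 12)) @ rng.standard_normal((12, 200))  # rank 12
blocks = np.hsplit(X, 4)              # X = [X1 X2 X3 X4], column blocks
U, s = hierarchical_svd(blocks, d=12)
s_ref = np.linalg.svd(X, compute_uv=False)[:12]
print(np.allclose(s, s_ref))          # singular values match the direct SVD
```

Note that no block ever needs to see another block's raw columns, which is the property the privacy-preserving passage above relies on; only the small local factors `U_i diag(s_i)` are shared with the merge step.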