2019
DOI: 10.48550/arxiv.1912.04007
Preprint

Subspace power method for symmetric tensor decomposition and generalized PCA

Joe Kileel,
João M. Pereira

Abstract: We introduce the Subspace Power Method (SPM) for decomposing symmetric even-order tensors. This algorithm relies on a power method iteration applied to a modified tensor, constructed from a matrix flattening. For tensors of rank up to roughly the square root of the number of tensor entries, we obtain provable guarantees for most steps of SPM, by drawing on results from algebraic geometry and dynamical systems. Numerical simulations indicate that SPM significantly outperforms state-of-the-art algorithms in term…
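The abstract describes a power method iteration for symmetric tensors. As an illustration of the classical ingredient SPM builds on, the following is a minimal sketch of a symmetric tensor power iteration for an order-4 tensor; it is a simplified stand-in, not the full Subspace Power Method from the paper (which additionally uses a matrix flattening and a subspace projection).

```python
import numpy as np

def symmetric_power_iteration(T, iters=200, seed=0):
    """Sketch of the classical symmetric tensor power method (order 4).

    Iterates x <- T(x, x, x, .) / ||T(x, x, x, .)||, i.e. contracts the
    tensor against the current iterate three times and renormalizes.
    Illustrative only; not the SPM algorithm itself.
    """
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        # y_i = sum_{j,k,l} T[i,j,k,l] x_j x_k x_l
        y = np.einsum('ijkl,j,k,l->i', T, x, x, x)
        x = y / np.linalg.norm(y)
    return x

# Rank-1 symmetric test tensor T = w ⊗ w ⊗ w ⊗ w; the iteration
# recovers w up to sign.
w = np.array([3.0, 4.0]) / 5.0
T = np.einsum('i,j,k,l->ijkl', w, w, w, w)
v = symmetric_power_iteration(T)
```

For a rank-1 input the fixed points of this map are exactly ±w, so `v` aligns with `w` up to sign; SPM's contribution is making such an iteration provably work for higher ranks by first deflating against an estimated subspace.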

Cited by 6 publications (11 citation statements). References 54 publications.
“…For n = 4 and r = O(n 2 ), the work [18] presented a provable algorithm using matrix eigendecompositions, which was robustified using ideas from the sums-of-squares hierarchy in [19]. In [20], a tensor power method was used, constructed from a matrix flattening of T , to find the components w i sequentially. Also, [21] showed how to implement direct optimization of (1) in an efficient manner for moment tensors T in an online setting.…”
Section: Tensor Preliminaries
confidence: 99%
“…, w_m ⊗ w_m} and, hence, multiple samples of independent Hessians allow one to compute an approximating subspace W̃ ≈ W. The construction of such a subspace is based exclusively on second-order information and differs from the tensor approach in [30], which uses higher-order tensor decomposition. The identification of the weights is then performed by projected gradient ascent, the so-called subspace power method [22, 32, 33], seeking solutions of…”
Section: Introduction
confidence: 99%
“…where W is a suitable subspace of symmetric tensors, P_W is the orthoprojector onto W, and ‖·‖ is the tensor spectral norm. The same type of optimization is used for efficiently computing symmetric rank-1 tensor decompositions [34]. Similarly, finding the largest and smallest Z-eigenvalues of an even-order symmetric tensor [50] is equivalent to calculating the maximum and minimum values, respectively, of the homogeneous polynomial associated with the tensor on the unit sphere.…”
Section: Introduction
confidence: 99%
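The last citation statement relates extreme Z-eigenvalues to optimizing a homogeneous polynomial on the unit sphere. A minimal sketch of that idea, assuming a dense order-4 symmetric tensor, is projected gradient ascent on the sphere; `max_z_eigenvalue` below is a hypothetical helper, not code from the cited works, and it may converge to a local rather than global maximizer.

```python
import numpy as np

def max_z_eigenvalue(T, step=0.5, iters=500, seed=1):
    """Projected gradient ascent sketch for an order-4 symmetric tensor:
    maximize f(x) = sum T[i,j,k,l] x_i x_j x_k x_l over the unit sphere.
    At a fixed point, T(x,x,x,.) = lam * x, i.e. (lam, x) is a Z-eigenpair.
    """
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        grad = 4 * np.einsum('ijkl,j,k,l->i', T, x, x, x)  # grad of f
        x = x + step * grad
        x /= np.linalg.norm(x)  # project back onto the unit sphere
    lam = float(np.einsum('ijkl,i,j,k,l->', T, x, x, x, x))
    return lam, x

# Diagonal test tensor: the standard basis vectors are Z-eigenvectors
# with Z-eigenvalues equal to the diagonal entries.
n = 3
T = np.zeros((n, n, n, n))
for i, d in enumerate([1.0, 2.0, 5.0]):
    T[i, i, i, i] = d
lam, x = max_z_eigenvalue(T)
```

The converged pair satisfies the Z-eigenpair equation T(x, x, x, ·) = λx; since the sphere-constrained problem is nonconvex, multiple random restarts are the usual way to seek the global maximum.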