Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing 2014
DOI: 10.1145/2591796.2591881

Smoothed analysis of tensor decompositions

Abstract: Low rank decomposition of tensors is a powerful tool for learning generative models. The uniqueness results that hold for tensors give them a significant advantage over matrices. However, tensors pose serious algorithmic challenges; in particular, much of the matrix algebra toolkit fails to generalize to tensors. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficien…
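The abstract is truncated above. For context on the kind of problem it describes, here is a minimal sketch (not the paper's algorithm) of recovering rank-one factors of a third-order tensor by simultaneous diagonalization; this classical approach only works in the non-overcomplete, full-column-rank setting that the paper moves beyond, and all function and variable names below are illustrative.

```python
import numpy as np

def jennrich_decompose(T, rank):
    """Recover the first factor matrix of T = sum_r a_r (x) b_r (x) c_r
    by simultaneous diagonalization; assumes the a_r and b_r are
    linearly independent (the overcomplete case needs other tools)."""
    n1, n2, n3 = T.shape
    # Contract the third mode with two random vectors.
    x, y = np.random.randn(n3), np.random.randn(n3)
    Mx = np.einsum('ijk,k->ij', T, x)   # Mx = A diag(C^T x) B^T
    My = np.einsum('ijk,k->ij', T, y)   # My = A diag(C^T y) B^T
    # Eigenvectors of Mx My^+ with largest |eigenvalue| recover the
    # columns of A up to scaling.
    W = Mx @ np.linalg.pinv(My)
    eigvals, eigvecs = np.linalg.eig(W)
    idx = np.argsort(-np.abs(eigvals))[:rank]
    return np.real(eigvecs[:, idx])

# Tiny usage example with a random rank-3 tensor in R^{5x5x5}.
np.random.seed(0)
n, r = 5, 3
A, B, C = (np.random.randn(n, r) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
A_est = jennrich_decompose(T, r)
```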

Cited by 96 publications (83 citation statements); References 31 publications.
“…These algorithms are typically based on finding a rank-one decomposition of the empirical 3rd or 4th moment tensor of the mixture; they heavily use the special structure of these moments for Gaussian mixtures. One paper we highlight is [BCMV14], which also uses much higher moments of the distribution. They show that in the smoothed analysis setting, the ℓth moment tensor of the distribution has algebraic structure which can be algorithmically exploited to recover the means.…”
Section: Related Work
confidence: 99%
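As a hedged illustration of the first step described in this excerpt, the sketch below forms the empirical third moment tensor of a sample drawn from a spherical Gaussian mixture; the names `empirical_third_moment`, `samples`, and `means` are illustrative and do not come from [BCMV14].

```python
import numpy as np

def empirical_third_moment(samples):
    """Empirical 3rd moment tensor E[x (x) x (x) x] of a sample matrix
    whose rows are observations. For a Gaussian mixture this tensor has
    extra algebraic structure that decomposition algorithms exploit."""
    # samples: (num_samples, dim) array
    return np.einsum('ni,nj,nk->ijk', samples, samples, samples) / len(samples)

# Usage: draw from a 2-component spherical Gaussian mixture in R^3.
rng = np.random.default_rng(0)
means = np.array([[2.0, 0.0, 0.0], [0.0, -2.0, 0.0]])
labels = rng.integers(0, 2, size=1000)
samples = means[labels] + rng.standard_normal((1000, 3))
M3 = empirical_third_moment(samples)   # shape (3, 3, 3)
```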
“…For a given matrix Φ, the K-rank_τ(Φ) is defined as the largest k for which every n × k submatrix of Φ has its smallest singular value larger than 1/τ. In [25], it is shown that the τ-robust K-rank is super-additive, implying that the K-rank_τ of the Khatri-Rao product is strictly larger than the individual K-rank_τ's of the input matrices.…”
Section: B. Related Work
confidence: 99%
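The definition quoted above can be checked directly, though only by brute force; the sketch below is an illustrative (exponential-time) computation of the robust K-rank_τ together with a Khatri-Rao product helper, with names chosen here for clarity rather than taken from [25].

```python
import numpy as np
from itertools import combinations

def robust_krank(Phi, tau):
    """Largest k such that EVERY n x k column submatrix of Phi has
    smallest singular value > 1/tau (brute force; exponential in k)."""
    n, m = Phi.shape
    best = 0
    for k in range(1, m + 1):
        ok = all(
            np.linalg.svd(Phi[:, list(cols)], compute_uv=False)[-1] > 1.0 / tau
            for cols in combinations(range(m), k)
        )
        if not ok:
            break
        best = k
    return best

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (I x R) and B (J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

# The super-additivity result cited above says, roughly, that
# robust_krank(khatri_rao(A, B), tau') exceeds the individual K-ranks.
```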
“…Using tensoring to produce linearly independent vectors has also been studied recently in the context of real vectors [BCMV14].…”
Section: Random Submatrices of E(m R)
confidence: 99%
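A small numerical illustration of this phenomenon, using a generic construction not taken from either paper: vectors forced to be linearly dependent by dimension counting become linearly independent after tensoring with themselves.

```python
import numpy as np

# Four generic vectors in R^3 are necessarily linearly dependent...
rng = np.random.default_rng(1)
V = rng.standard_normal((3, 4))           # columns v_1..v_4 in R^3
print(np.linalg.matrix_rank(V))           # 3: dependent

# ...but their tensor squares v_i (x) v_i live in R^9 and are
# generically linearly independent.
V2 = np.column_stack([np.kron(V[:, i], V[:, i]) for i in range(4)])
print(np.linalg.matrix_rank(V2))          # 4: independent
```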