2012
DOI: 10.1137/110836067
A New Truncation Strategy for the Higher-Order Singular Value Decomposition

Abstract: We present an alternative strategy to truncate the higher-order singular value decomposition (T-HOSVD). An error expression for an approximate Tucker decomposition with orthogonal factor matrices is presented, leading us to propose a novel truncation strategy for the HOSVD, which we refer to as the sequentially truncated higher-order singular value decomposition (ST-HOSVD). This decomposition retains several favorable properties of the T-HOSVD, while reducing the number of operations to compute the decompositi…
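The sequential truncation idea from the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under the assumption of a dense tensor and a fixed mode-processing order; the function name and interface are this sketch's own, not the paper's reference implementation.

```python
import numpy as np

def st_hosvd(tensor, ranks):
    """Minimal ST-HOSVD sketch: truncate one mode at a time.

    After mode k is processed, the working core shrinks to ranks[k]
    along that mode, so the SVDs for later modes operate on smaller
    matrices than in the classical (non-sequential) T-HOSVD.
    """
    core = tensor
    factors = []
    for k, r in enumerate(ranks):
        # Unfold the *current* (already partially truncated) core along mode k.
        unfolding = np.moveaxis(core, k, 0).reshape(core.shape[k], -1)
        # The leading r left singular vectors give the mode-k factor matrix.
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        # Project onto the retained subspace before moving to the next mode.
        projected = (U.T @ unfolding).reshape((r,) + core.shape[:k] + core.shape[k + 1:])
        core = np.moveaxis(projected, 0, k)
    return core, factors
```

With full ranks the decomposition is exact; with smaller ranks the returned core has shape `ranks` and each factor matrix has orthonormal columns.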

Cited by 250 publications (259 citation statements)
References 51 publications
“…Introduction. Data-sparse representations of elements living in the tensor product of (finite-dimensional) vector spaces have become an intensely studied subject in recent years, yielding a myriad of representations, such as the higher-order singular value decomposition [23,55,56], the CANDECOMP/PARAFAC decomposition (CPD) [15,31], block-term decompositions [22], H-Tucker [28,30], and tensor trains [44], each with varying assumptions and divergent applications. Among these, the CPD is the oldest; according to [11], its roots (for symmetric tensors) can be traced back to algebraic geometry in the middle of the 19th century, as featured in the work of Sylvester.…”
mentioning
confidence: 99%
“…In particular, for mrank-(1, R, R) approximations it was shown to perform at least as well as THOSVD. Simulation results performed in [11] suggest that this superiority holds in most cases. For the particular case of rank-one approximations, [12] has come up with a two-stage algorithm called sequential rank-one approximation and projection (SeROAP), which first reduces dimensionality just like SeMP and then performs a sequence of projections for refining the approximation.…”
Section: Introduction
mentioning
confidence: 87%
“…Though it is suboptimal, its error is bounded by a multiple of the optimal value. The alternative proposed in [11], which we refer to as sequentially optimal modal projections (SeMP), proceeds similarly but computes the modal projectors in a sequential fashion. This leads to the same error bound, but at a smaller cost due to the reduced size of the SVDs.…”
Section: Introduction
mentioning
confidence: 99%
“…Typical 3D Tucker compression approaches involve knowing in advance the desired core tensor size, so that either a) a rank-(R1, R2, R3) approximation can be directly computed [16]; or b) a subset of an existing approximation is selected in favor of a lower memory usage, and at the cost of reduced reconstruction quality. Setting the target ranks Ri beforehand simplifies the problem and can be used for a faster decomposition (Vannieuwenhoven et al [28]). However, selecting a meaningful number of ranks is a nontrivial issue [12].…”
Section: Related Work
mentioning
confidence: 99%
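The last statement notes that selecting a meaningful number of ranks is nontrivial. One common heuristic, shown here as an illustration and not as the method of the cited works, keeps per mode the smallest rank whose discarded singular values fall below a relative tolerance; the function name and default tolerance are this sketch's assumptions.

```python
import numpy as np

def pick_ranks(tensor, rel_tol=1e-2):
    """Per mode, keep the smallest rank whose discarded singular
    values have Frobenius norm below rel_tol times the tensor norm."""
    ranks = []
    for k in range(tensor.ndim):
        unfolding = np.moveaxis(tensor, k, 0).reshape(tensor.shape[k], -1)
        s = np.linalg.svd(unfolding, compute_uv=False)
        # tail[r] = norm of the singular values discarded when keeping r of them;
        # tail[0] equals the Frobenius norm of the whole tensor.
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
        tail = np.concatenate([tail, [0.0]])
        r = int(np.argmax(tail <= rel_tol * tail[0]))
        ranks.append(max(r, 1))  # always keep at least one component
    return ranks
```

For a tensor of exact low multilinear rank, a tight tolerance recovers the true ranks; for noisy data, the tolerance trades reconstruction error against core size.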