2019
DOI: 10.1137/17m1152371
Exploiting Efficient Representations in Large-Scale Tensor Decompositions

Abstract: Decomposing tensors into simple terms is often an essential step to discover and understand underlying processes or to compress data. However, storing the tensor and computing its decomposition is challenging in a large-scale setting. In many cases, though, a tensor is structured, i.e., it can be represented using few parameters: a sparse tensor is determined by the positions and values of its nonzeros, a polyadic decomposition by its factor matrices, a Tensor Train by its core tensors, and a Hankel tensor by its generating vector.
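To make concrete how few parameters such a representation needs, here is a minimal NumPy sketch (our illustration, not code from the paper; the function name hankel_tensor and the sizes are assumptions) that builds a dense third-order Hankel tensor from its generating vector:

```python
import numpy as np

def hankel_tensor(g, shape):
    """Dense third-order Hankel tensor with T[i, j, k] = g[i + j + k],
    built from its generating vector g."""
    I, J, K = shape
    assert g.size >= I + J + K - 2, "generating vector too short"
    i, j, k = np.meshgrid(np.arange(I), np.arange(J), np.arange(K),
                          indexing="ij")
    return g[i + j + k]

# A 50 x 50 x 50 Hankel tensor has 125,000 entries, yet it is fully
# determined by a generating vector of only 148 parameters.
g = np.random.default_rng(0).standard_normal(148)
T = hankel_tensor(g, (50, 50, 50))
print(T.shape, g.size)   # (50, 50, 50) 148
```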

Cited by 27 publications (23 citation statements). References 86 publications (157 reference statements).
“These mtkrprod can be computed efficiently without explicitly constructing the Khatri-Rao products C ⊙ B and so on. Various implementations have emerged that avoid permuting the tensor in memory [57,58], that avoid communication [59], that reuse intermediate results [58], that exploit sparsity by only computing rows of the Khatri-Rao product corresponding to nonzero entries [60,61], or that exploit structure in the tensor, such as tensor train or Hankel structure [60,62].”
Section: Gradient
Citation type: mentioning (confidence: 99%)
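To illustrate the contrast this citation draws, the following is a small NumPy sketch (our illustration, not code from the cited implementations [57-62]): the naive routine materializes the Khatri-Rao product, while the second computes the same mode-1 MTTKRP through pairwise contractions and never forms it.

```python
import numpy as np

def mttkrp_naive(T, B, C):
    """Mode-1 MTTKRP by explicitly forming the Khatri-Rao product;
    the (J*K) x R product matrix costs O(J*K*R) extra memory."""
    I, J, K = T.shape
    R = B.shape[1]
    # Row (j, k) holds the entry-wise product B[j, :] * C[k, :],
    # matching the row-major mode-1 unfolding of T used below.
    kr = (B[:, None, :] * C[None, :, :]).reshape(J * K, R)
    return T.reshape(I, J * K) @ kr

def mttkrp_fast(T, B, C):
    """Same result via two pairwise contractions; the Khatri-Rao
    product is never materialized."""
    tmp = np.einsum('ijk,kr->ijr', T, C)    # contract over mode 3
    return np.einsum('ijr,jr->ir', tmp, B)  # then over mode 2

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 7, 8))
B = rng.standard_normal((7, 3))
C = rng.standard_normal((8, 3))
assert np.allclose(mttkrp_naive(T, B, C), mttkrp_fast(T, B, C))
```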
“Consider, for example, a nonnegativity constraint, i.e., A ≥ 0, where ≥ holds entry-wise, and a compression matrix U together with a compressed factor Ã such that UÃ = A; then the constraint Ã ≥ 0 does not imply that A ≥ 0. The structured tensor decomposition framework proposed in [62] avoids this by exploiting the efficient representation of a tensor while keeping the original factor matrices, i.e., A, B, and C, in the optimization problem. Instead of working with the original T, its truncated MLSVD S…”
Section: Large-scale Computations
Citation type: mentioning (confidence: 99%)
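A tiny numeric sketch (our illustration, not code from [62]) shows why the implication fails: with a mixed-sign compression matrix U, a nonnegative compressed factor Ã can still yield a factor A = UÃ with negative entries.

```python
import numpy as np

# U plays the role of a compression matrix with orthonormal columns
# (e.g., from a truncated MLSVD); A_tilde is the compressed factor.
# Both names are ours, chosen for the illustration.
U = np.array([[0.6, -0.8],
              [0.8,  0.6]])   # orthonormal, but with mixed signs
A_tilde = np.array([[0.2],
                    [1.0]])   # entry-wise nonnegative
A = U @ A_tilde               # the original factor A = U @ A_tilde
print(A.ravel())              # [-0.68  0.76]: A_tilde >= 0, yet A
                              # has a negative entry
```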