2020
DOI: 10.48550/arxiv.2002.11835
Preprint

Tensor Decompositions in Deep Learning

Abstract: The paper surveys the topic of tensor decompositions in modern machine learning applications. It focuses on three active research topics of significant relevance for the community. After a brief review of consolidated works on multi-way data analysis, we consider the use of tensor decompositions in compressing the parameter space of deep learning models. Lastly, we discuss how tensor methods can be leveraged to yield richer adaptive representations of complex data, including structured information. The paper c…


Cited by 2 publications (4 citation statements)
References 17 publications
“…The popularity of image decomposition and natural language processing in the context of DL [28] has increased the demand for effective storage and support of the numerous parameters needed by deep learning algorithms. One solution to this problem is to employ sparse representation methods to compress the parameter matrix and lower storage pressure, such as tensor decomposition and matrix decomposition.…”
Section: Results
Mentioning confidence: 99%
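The citation statement above can be made concrete with a minimal sketch of low-rank parameter compression. All shapes and the chosen rank below are illustrative assumptions; the sketch uses a truncated SVD, one of the matrix-decomposition methods the citing work alludes to, to factor a dense layer's weight matrix into two small factors.

```python
import numpy as np

# Hypothetical dense weight matrix of a layer: 512 x 256 = 131,072 parameters.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))

r = 16  # target rank (assumption; in practice tuned against accuracy loss)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]  # 512 x r factor
B = Vt[:r, :]         # r x 256 factor

original_params = W.size
compressed_params = A.size + B.size
print(original_params, compressed_params)  # 131072 vs 12288

# The compressed layer applies x @ A @ B instead of x @ W.
W_approx = A @ B
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
```

Storage drops by roughly 10x here; the relative reconstruction error `rel_err` is what one would trade off against the compression ratio.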
“…Vectors can benefit from the compression capabilities of the reshaping and unfolding techniques of matrix TDs, and an analysis is done to find the ideal compression ratio by reshaping. In DL [28], the input vector is reshaped into a three-way tensor, and the output tensor is unfolded into a vector. This reshaping and unfolding process reduces the number of parameters through tensor decomposition.…”
Section: Results
Mentioning confidence: 99%
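The reshape-and-unfold pipeline described in this citation statement can be sketched as follows. The shapes (a length-512 vector folded into an 8 x 8 x 8 tensor) and the Tucker-style mode-wise contraction are illustrative assumptions, not the cited paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(512)
X = x.reshape(8, 8, 8)  # input vector -> three-way tensor

# One small 8 x 8 factor per mode replaces a single 512 x 512 dense matrix.
U1, U2, U3 = (rng.standard_normal((8, 8)) for _ in range(3))
Y = np.einsum('ai,bj,ck,ijk->abc', U1, U2, U3, X)  # mode-wise products

y = Y.reshape(-1)  # output tensor unfolded back into a vector

dense_params = 512 * 512  # a full dense map would need 262,144 parameters
factored_params = 3 * 8 * 8  # the three mode factors need only 192
```

The parameter reduction comes entirely from replacing one large matrix acting on the flat vector with small factors acting on each tensor mode.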
“…Moreover, Sidiropoulos et al investigated tensor factorization models and established their relationship with signal processing and machine learning applications [12]. Bacciu et al summarized tensor decompositions in deep learning, including TTD, Canonical Polyadic (CP) decomposition, and TD, and also compared their performance on neural model compression [31]. Furthermore, our previous empirical study [19] demonstrated that TD enables a compressed DNN model, but the model cannot guarantee the DNN's performance.…”
Section: Related Work and Our Contribution
Mentioning confidence: 99%
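The three decompositions this citation statement mentions (CP, Tucker, tensor-train) differ sharply in how their parameter counts scale. A back-of-the-envelope comparison, using the standard counting formulas for an order-d tensor with every mode of size n and a uniform rank r (the concrete d, n, r values below are illustrative):

```python
def cp_params(d, n, r):
    # d factor matrices of size n x r
    return d * n * r

def tucker_params(d, n, r):
    # d factor matrices plus an r^d core tensor
    return d * n * r + r ** d

def tt_params(d, n, r):
    # two boundary cores of size n x r, d-2 interior cores of size r x n x r
    return 2 * n * r + (d - 2) * n * r * r

d, n, r = 4, 32, 8
full = n ** d  # storing the full tensor: 1,048,576 entries
print(cp_params(d, n, r), tucker_params(d, n, r), tt_params(d, n, r))
# 1024, 5120, 4608
```

CP scales linearly in d, TT quadratically in r, and Tucker's r^d core term eventually dominates as the order grows, which is the usual driver behind the relative compression performance such comparisons report.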