2018
DOI: 10.1007/978-3-319-93764-9_42

Learning Fast Dictionaries for Sparse Representations Using Low-Rank Tensor Decompositions

Cassio Fraga Dantas, Jérémy Cohen, Rémi Gribonval

Abstract: A new dictionary learning model is introduced where the dictionary matrix is constrained as a sum of R Kronecker products of K terms. It offers a more compact representation and requires less training data than the general dictionary learning model, while generalizing Tucker dictionary learning. The proposed Higher Order Sum of Kroneckers…
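As a concrete illustration of the structure described in the abstract, the sketch below assembles a dictionary constrained as a sum of R Kronecker products of K terms in NumPy. The factor shapes, R = 2, and K = 3 are illustrative assumptions, not values from the paper.

    import numpy as np
    from functools import reduce

    # D = sum_{r=1}^{R} kron(A_r^(1), ..., A_r^(K)), stored via its small factors.
    def sum_of_kron_dictionary(factors):
        return sum(reduce(np.kron, terms) for terms in factors)

    rng = np.random.default_rng(0)
    R, shapes = 2, [(8, 10), (8, 10), (4, 5)]  # K = 3 factor shapes (assumed)
    factors = [[rng.standard_normal(s) for s in shapes] for _ in range(R)]
    D = sum_of_kron_dictionary(factors)
    print(D.shape)  # (256, 500): a large dictionary built from small factors
    print(R * sum(m * n for m, n in shapes), D.size)  # 360 parameters vs 128000 entries

The compactness claim is visible directly: the structured dictionary stores 360 numbers where an unconstrained dictionary of the same size stores 128,000.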

Cited by 8 publications (5 citation statements) · References 18 publications
“…Earlier results in [14] are similar to SuKro and also propose a non-separable version based on CP decomposition. The latest in this line of work is a DL algorithm to learn sums of R Kronecker products of K terms [22].…”
Section: Methods
confidence: 99%
“…To address this problem, the tensor-based dictionary learning algorithms [183,184,185,186,187] are considered an alternative approach for this type of training data. Additional algorithms in this category are CPD-based dictionary learning [188,189,190,191] and Tucker-based dictionary learning [192,193,194,195], where the algorithms utilize similar formulations of Equation (3.8) or Equation (3.9) based on multi-dimensional representations with sparsity constraints.…”
Section: Dictionary Learning
confidence: 99%
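Since this statement groups CPD-based methods together, a minimal sketch of the underlying canonical polyadic decomposition may help. It uses the TensorLy library; the tensor shape and rank 5 are assumptions for illustration only.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(0)
    X = tl.tensor(rng.standard_normal((10, 12, 8)))  # toy 3-way data tensor

    cp = parafac(X, rank=5)      # weights plus one factor matrix per mode
    X_hat = tl.cp_to_tensor(cp)  # reassemble the rank-5 approximation
    print(float(tl.norm(X - X_hat) / tl.norm(X)))  # relative approximation error

Roughly speaking, CPD-based dictionary learning methods build dictionary structure out of decompositions of this kind, combined with sparsity constraints on the coefficients.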
“…The other batch algorithm we propose, named TeFDiL, learns subdictionaries of the LSR dictionary by exploiting the connection to tensor recovery and using tensor CPD. Recently, Dantas et al. [33] proposed an algorithm for learning an LSR dictionary for tensor data in which the dictionary update stage is a projected gradient descent algorithm that involves a CPD after every gradient step. In contrast, TeFDiL only requires a single CPD at the end of each dictionary update stage.…”
Section: B. Relation to Prior Work
confidence: 99%
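To make the contrasted dictionary update concrete: for K = 2, projecting a matrix onto sums of R Kronecker products reduces to a rearrangement followed by a rank-R truncated SVD (the Van Loan-Pitsianis construction); for K > 2 the truncation is replaced by a rank-R CPD of the rearranged tensor, which matches the CPD step both algorithms invoke per the statement above. The sketch below covers only the K = 2 case, with all shapes as illustrative assumptions.

    import numpy as np

    def project_sum_of_kron(D, m, n, p, q, R):
        # Each p x q block of the (m*p) x (n*q) matrix D becomes one row of
        # the rearranged matrix; kron(A, B) rearranges to the rank-1 matrix
        # vec(A) vec(B)^T, so a rank-R truncation projects D onto sums of R
        # Kronecker products (Van Loan-Pitsianis, K = 2 case).
        blocks = D.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)
        U, s, Vt = np.linalg.svd(blocks, full_matrices=False)
        D_R = np.zeros_like(D)
        for r in range(R):
            A_r = (s[r] * U[:, r]).reshape(m, n)
            B_r = Vt[r].reshape(p, q)
            D_R += np.kron(A_r, B_r)
        return D_R

    # Sanity check: a matrix that is already a sum of R Kroneckers is a fixed point.
    rng = np.random.default_rng(0)
    D = sum(np.kron(rng.standard_normal((4, 5)), rng.standard_normal((3, 2)))
            for _ in range(2))
    print(np.allclose(project_sum_of_kron(D, 4, 5, 3, 2, R=2), D))  # True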