2018
DOI: 10.1109/TIT.2018.2799931

Minimax Lower Bounds on Dictionary Learning for Tensor Data

Abstract: This paper provides fundamental limits on the sample complexity of estimating dictionaries for tensor data. The specific focus of this work is on Kth-order tensor data and the case where the underlying dictionary can be expressed in terms of K smaller dictionaries. It is assumed that the data are generated by linear combinations of these structured dictionary atoms and observed through white Gaussian noise. This work first provides a general lower bound on the minimax risk of dictionary learning for such tensor data…
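As a rough illustration of the generative model the abstract describes, here is a minimal NumPy sketch for K = 2: a Kronecker-structured dictionary applied to a sparse coefficient vector, observed in white Gaussian noise. All dimensions, the sparsity level, and the noise level are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

# Sketch of the structured generative model for K = 2: the dictionary
# is a Kronecker product of two smaller coordinate dictionaries, and
# data are sparse combinations of its atoms plus Gaussian noise.
rng = np.random.default_rng(0)

m1, p1 = 8, 12   # first coordinate dictionary D1: m1 x p1 (assumed sizes)
m2, p2 = 6, 10   # second coordinate dictionary D2: m2 x p2 (assumed sizes)
s = 4            # assumed sparsity of the coefficient vector

D1 = rng.standard_normal((m1, p1))
D2 = rng.standard_normal((m2, p2))
D = np.kron(D1, D2)              # structured dictionary, (m1*m2) x (p1*p2)

x = np.zeros(p1 * p2)            # s-sparse coefficient vector
support = rng.choice(p1 * p2, size=s, replace=False)
x[support] = rng.standard_normal(s)

sigma = 0.1                      # assumed noise standard deviation
y = D @ x + sigma * rng.standard_normal(m1 * m2)  # noisy observation
```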

Cited by 20 publications (17 citation statements)
References 38 publications (97 reference statements)
“…a lower bound for the minimax risk ε* is attained. A formal proof of Theorem 1 relies on the following lemmas, whose proofs appear in the full version of this work [17]. Note that since our construction of D_L is more complex than the vector case [16, Theorem 1], it requires a different sequence of lemmas, with the exception of Lemma 3, which follows from the vector case.…”
Section: Lower Bound for General Coefficients (mentioning; confidence: 99%)
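For context, the minimax risk ε* referenced in this excerpt is conventionally the best worst-case expected estimation error over the dictionary class. A sketch in standard notation (the Frobenius-norm loss and the class 𝒟_L are assumptions inferred from the excerpt, not quoted from [17]):

```latex
% Minimax risk of dictionary estimation: infimum over all estimators
% built from the noisy observations, supremum over dictionaries in
% the structured class D_L.
\varepsilon^* \;=\; \inf_{\widehat{D}} \; \sup_{D \in \mathcal{D}_L} \;
  \mathbb{E}\!\left[ \lVert \widehat{D} - D \rVert_F^2 \right]
```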
“…We state a variation of Lemma 2 necessary for the proof of Theorem 2. The proof of the lemma is again provided in [17].…”
Section: A. Sparse Gaussian Coefficients (mentioning; confidence: 99%)
“…Theoretical analysis suggests that the sample complexity of tensor-structured dictionary learning can be significantly lower than that for unstructured, vector data; see [113] for 2D data and [114] for N-order tensor data. This suggests that better performance is achievable with separable dictionary learning from tensor data than with vector-based dictionary learning methods.…”
Section: E. Tensor-Structured Dictionary Learning (mentioning; confidence: 99%)
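A hedged sketch of the intuition behind this sample-complexity gap, in assumed notation (K coordinate dictionaries of size m_k × p_k; not quoted from [113] or [114]): the separable structure has far fewer degrees of freedom than an unstructured dictionary on the vectorized tensor.

```latex
% Degrees-of-freedom comparison (assumed notation: K coordinate
% dictionaries D_k of size m_k x p_k):
\underbrace{\sum_{k=1}^{K} m_k\, p_k}_{\text{separable (Kronecker) dictionary}}
\quad\text{vs.}\quad
\underbrace{\prod_{k=1}^{K} m_k \,\cdot\, \prod_{k=1}^{K} p_k}_{\text{unstructured dictionary}}
```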
“…Beyond scalability, separable dictionaries have been shown to be theoretically appealing when dealing with tensor-valued observations. According to [24], the number of samples needed for accurate (up to a given error) reconstruction of a Kronecker-structured dictionary within a local neighborhood scales with the sum of the products of the dimensions of the constituent dictionaries when learning on tensor data (see [25] for 2-dimensional data, and [26], [27] for N-order tensor data), compared to the product for vectorized observations. This suggests that better performance is achievable on tensor observations than with classical methods.…”
Section: Limits of the Classical Approach (mentioning; confidence: 99%)
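As a rough numeric check of the sum-versus-product scaling described above (the dimensions below are illustrative assumptions, not values from [24]–[27]), a short Python sketch:

```python
from math import prod

# Example dimensions for a 3rd-order tensor with coordinate
# dictionaries D_k of size m_k x p_k (assumed values).
m = [8, 6, 4]    # per-mode observation dimensions m_k
p = [12, 10, 8]  # per-mode dictionary sizes p_k

# Separable (Kronecker-structured) dictionary: parameter count grows
# as the sum of the products m_k * p_k.
structured = sum(mk * pk for mk, pk in zip(m, p))   # 96 + 60 + 32 = 188

# Unstructured dictionary on the vectorized tensor: parameter count
# grows as the product of all dimensions, prod(m_k) * prod(p_k).
unstructured = prod(m) * prod(p)                    # 192 * 960 = 184320

print(structured, unstructured)
```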