2021
DOI: 10.48550/arxiv.2109.06610
Preprint

Statistical limits of dictionary learning: random matrix theory and the spectral replica method

Abstract: We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size. This is in contrast with most existing literature concerned with the low-rank (i.e., constant-rank) regime. We first consider a class of rotationally invariant matrix denoising problems whose mutual information and minimum mean-square error are computable using standard techniques from random ma…
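The observation model described in the abstract can be sketched as a toy simulation (our own illustration with hypothetical parameter choices, not code from the paper, which works analytically): a symmetric signal matrix whose rank M = ρN grows linearly with the system size N, observed through an additive Gaussian channel.

```python
import numpy as np

# Toy sketch of the extensive-rank matrix denoising setup: the signal
# S = X X^T / sqrt(N) has rank M = rho * N growing linearly with the
# system size N, and is observed through a Gaussian channel
# Y = S + sqrt(Delta) * Z. Parameter values are illustrative only.
rng = np.random.default_rng(0)

N = 200            # system size
rho = 0.5          # rank ratio: M / N stays constant as N grows
M = int(rho * N)
Delta = 0.1        # channel noise variance

X = rng.standard_normal((N, M))
S = X @ X.T / np.sqrt(N)          # extensive-rank signal, rank M
Z = rng.standard_normal((N, N))
Z = (Z + Z.T) / np.sqrt(2)        # symmetric Gaussian noise matrix
Y = S + np.sqrt(Delta) * Z        # noisy observation to denoise

print(np.linalg.matrix_rank(S))   # rank M scales with N, unlike low-rank models
```

The point of the sketch is the scaling: doubling N doubles the rank of S, which is what places the problem outside the constant-rank regime treated in most prior work.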

Cited by 6 publications (20 citation statements)
References 92 publications (190 reference statements)
“…Finally, another very recent work, [BM21], came out during the completion of the present paper. This work develops a "spectral replica" calculation for extensive-rank matrix factorization with non-Gaussian prior and (only) Gaussian channel.…”
Section: Related Work
confidence: 87%
“…This hint is strengthened by recent results in [BM21] using the spectral replica method: in the special case of Gaussian channels P_out, it analytically computes the asymptotic mean free energy with the replica method, and its results strongly suggest that the proper order parameter is indeed a probability measure of eigenvalues.…”
confidence: 86%
“…Then, the ℓBNN problem is closely related to the random-design linear-rank matrix inference task, which is known to be challenging to analyze [34]–[36]. We direct the interested reader to recent works by Barbier and Macris [35] and by Maillard et al. [36] on this problem, and defer more detailed analysis to future work.…”
Section: N, D → ∞ and P Fixed
confidence: 99%
“…Linear models always carry information about the model parameters. In general, the overlap is a decreasing function of the load R: for smaller R the problem is better determined, and hence the learned labels are more correlated to the true symbols [17]. This is shown in Fig.…”
Section: A Classical Case: Linear Generative Model
confidence: 99%
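The overlap-versus-load behaviour quoted above can be checked with a small simulation (our hedged illustration, not the cited paper's experiment; the ridge estimator and all parameter values are our own choices): in a linear generative model, the normalized overlap between the estimated and true parameters drops as the load R = D/N grows, because fewer samples per parameter leave the problem less determined.

```python
import numpy as np

rng = np.random.default_rng(1)

def overlap(R, D=200, sigma=0.5):
    """Normalized overlap <w_hat, w0> / (|w_hat| |w0|) at load R = D / N.

    Toy linear generative model y = X @ w0 + noise, estimated with a
    simple ridge regressor as a stand-in for the Bayes-optimal estimator.
    """
    N = int(D / R)                       # number of samples at this load
    w0 = rng.standard_normal(D)          # true parameters
    X = rng.standard_normal((N, D)) / np.sqrt(D)
    y = X @ w0 + sigma * rng.standard_normal(N)
    w_hat = np.linalg.solve(X.T @ X + sigma**2 * np.eye(D), X.T @ y)
    return float(w_hat @ w0 / (np.linalg.norm(w_hat) * np.linalg.norm(w0)))

print(overlap(0.25), overlap(2.0))  # smaller load -> larger overlap
```

Run across a range of loads, the overlap curve is monotonically decreasing, matching the qualitative claim in the citation statement.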