2021
DOI: 10.48550/arxiv.2110.08775
Preprint

Perturbative construction of mean-field equations in extensive-rank matrix factorization and denoising

Antoine Maillard, Florent Krzakala, Marc Mézard, et al.

Abstract: Factorization of matrices where the rank of the two factors diverges linearly with their sizes has many applications in diverse areas such as unsupervised representation learning, dictionary learning or sparse coding. We consider a setting where the two factors are generated from known componentwise independent prior distributions, and the statistician observes a (possibly noisy) componentwise function of their matrix product. In the limit where the dimensions of the matrices tend to infinity, but their ratios…
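The setting described in the abstract can be made concrete with a short, purely illustrative data-generation sketch: two factors drawn from componentwise independent priors (standard Gaussians, one possible choice) and a noisy componentwise observation of their product. All dimensions, the priors, the 1/√n scaling, and the additive-Gaussian channel below are assumptions for illustration, not necessarily the paper's exact conventions.

```python
import numpy as np

# Illustrative instance of the extensive-rank matrix factorization setting:
# all dimensions grow together, so the ratios m/n and p/n stay fixed.
rng = np.random.default_rng(0)

n, m, p = 500, 400, 600   # inner rank n scales linearly with the outer sizes
sigma = 0.1               # noise level of the componentwise channel (assumed)

# Two factors with componentwise independent priors (here standard Gaussians).
F = rng.standard_normal((m, n))
X = rng.standard_normal((n, p))

# The statistician observes a componentwise (here additive-Gaussian) function
# of the matrix product; the 1/sqrt(n) scaling keeps entries of order one.
Z = F @ X / np.sqrt(n)
Y = Z + sigma * rng.standard_normal((m, p))

print(Y.shape)  # (400, 600): the observed matrix
```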

Cited by 6 publications (17 citation statements)
References 24 publications
“…The natural setup for this joint asymptotic limit is a random-design teacher-student setting, in which the input examples are independent and identically distributed samples from some distribution and the targets are generated by a linear model with a random coefficient matrix. Then, the ℓBNN problem is closely related to the random-design linear-rank matrix inference task, which is known to be challenging to analyze [34]-[36]. We direct the interested reader to recent works by Barbier and Macris [35] and by Maillard et al. [36] on this problem, and defer more detailed analysis to future work.…”
Section: N, D → ∞ and P fixed (mentioning)
confidence: 99%
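As a rough, hypothetical illustration of the random-design teacher-student setting described in the excerpt above, the sketch below generates i.i.d. Gaussian inputs and targets from a linear teacher with a random coefficient matrix. Every dimension and the noise model are assumptions for illustration, not taken from the cited works.

```python
import numpy as np

# Hypothetical random-design teacher-student data generation.
rng = np.random.default_rng(1)

n, d, k = 1000, 200, 50   # samples, input dimension, output dimension (assumed)
noise = 0.05              # teacher noise level (assumed)

X = rng.standard_normal((n, d))          # random-design inputs, i.i.d. rows
W_teacher = rng.standard_normal((d, k))  # random coefficient matrix (the "teacher")
Y = X @ W_teacher / np.sqrt(d) + noise * rng.standard_normal((n, k))

# A student then tries to infer W_teacher from the pairs (X, Y); when n, d,
# and k all diverge at fixed ratios, this becomes the linear-rank matrix
# inference task the excerpt describes as challenging to analyze.
```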
“…are recovered wrongly, leading to m⋆ < 1. Here, σ² is small but fixed. However, by setting R small enough, we can send Cm⋆ very close to one.…”
Section: A Classical Case: Linear Generative Model (mentioning)
confidence: 99%
“…Further, Park et al. [37] developed bilinear GAMP (BiG-AMP), which extends the GAMP algorithm to the bilinear model in which both the signal of interest and the measurement matrix are unknown. Recent works showed that BiG-AMP can be obtained by the Plefka-Georges-Yedidia method [38], [39]. Following VAMP, [40], [41] developed a generalized linear model VAMP (GLM-VAMP) algorithm by constructing an equivalent linear model.…”
Section: Introduction (mentioning)
confidence: 99%
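To make the bilinear model mentioned in this excerpt concrete, below is a minimal synthetic instance in which both the measurement matrix and a sparse signal are unknown and only their noisy product is observed: the kind of problem BiG-AMP-style algorithms target. The dimensions, sparsity level, and priors are illustrative assumptions; this is not an implementation of BiG-AMP itself.

```python
import numpy as np

# Synthetic instance of the bilinear observation model: Y = A X + W,
# with both A and X unknown to the inference algorithm.
rng = np.random.default_rng(2)

m, n, p = 300, 100, 400                       # dimensions (assumed)
A = rng.standard_normal((m, n))               # unknown measurement matrix
X = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.1)  # sparse unknown signal
W = 0.01 * rng.standard_normal((m, p))        # additive noise

Y = A @ X + W  # only Y is observed; BiG-AMP-style methods try to recover A and X
```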