2021
DOI: 10.48550/arxiv.2109.09859
Preprint

Sharp global convergence guarantees for iterative nonconvex optimization: A Gaussian process perspective

Cited by 2 publications (3 citation statements)
References 0 publications
“…First, we establish entrywise guarantees on the EM algorithm: our results hold in the stronger ℓ∞-norm, not just the ℓ2-norm. Second, and in contrast to several results in the Gaussian case for mixtures of linear regressions (Kwon et al., 2019; Chandrasekher et al., 2021), we do not assume that each iterate of the EM algorithm takes in a fresh sample, i.e., our convergence guarantee is not based on sample-splitting. Third, our result for the ℓ2-norm is sharp in that it is also optimal in terms of the constant factor, while most previous results for EM only achieve the optimal rate up to constant factors.…”
Section: Contributions and Organization
confidence: 95%
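
To make the sample-splitting contrast in this quote concrete, here is a minimal sketch, assuming a symmetric two-component mixture of linear regressions with the standard tanh-weighted EM update; the model, parameter values, and helper names are illustrative, not code from either cited paper:

```python
import numpy as np

def em_update(X, y, theta, sigma2=1.0):
    # One EM step for the symmetric two-component mixture of linear regressions
    # y_i = s_i * <x_i, theta*> + eps_i with hidden signs s_i in {-1, +1}.
    # E-step: posterior sign weight w_i = tanh(y_i * <x_i, theta> / sigma2).
    # M-step: least squares of the reweighted responses w * y on X.
    w = np.tanh(y * (X @ theta) / sigma2)
    return np.linalg.lstsq(X, w * y, rcond=None)[0]

def run_em(X, y, theta0, n_iters, sample_split=False):
    # sample_split=True feeds each iterate a fresh, disjoint batch of data
    # (the assumption the quote contrasts with); False reuses the full sample.
    theta = theta0
    if sample_split:
        for idx in np.array_split(np.arange(len(y)), n_iters):
            theta = em_update(X[idx], y[idx], theta)
    else:
        for _ in range(n_iters):
            theta = em_update(X, y, theta)
    return theta

# Synthetic comparison of the two schedules.
rng = np.random.default_rng(0)
n, d = 4000, 10
theta_star = np.ones(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n) * (X @ theta_star) + 0.1 * rng.standard_normal(n)
theta0 = theta_star + 0.1 * rng.standard_normal(d)  # warm start near theta*
for split in (False, True):
    est = run_em(X, y, theta0, n_iters=8, sample_split=split)
    err = min(np.linalg.norm(est - theta_star), np.linalg.norm(est + theta_star))
    print(f"sample_split={split}: l2 error {err:.4f}")
```

Under sample-splitting, each iterate sees statistically independent data, which is what makes the iteration-by-iteration analyses in the cited works tractable; the quoted paper's point is that its guarantee holds without this device.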
“…As we see in the figure, the EM algorithm achieves a small error ‖θ^(T) − θ*‖₂² with high probability even if η is close to 1. We remark that there is a sizable literature on the success of iterative algorithms with random initialization for a variety of problems, such as phase retrieval, Gaussian mixtures (Dwivedi et al., 2020; Wu and Zhou, 2021), mixtures of log-concave distributions, and general regression models with Gaussian covariates (Chandrasekher et al., 2021). However, Gaussianity or a continuous density is typically part of the assumption, and analyzing iterative algorithms beyond the Gaussian setting appears to be a generally more challenging problem.…”
Section: Failure of Random Initialization
confidence: 99%
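
As an illustration of the random-initialization setting this quote describes, the sketch above can be continued by drawing the starting point uniformly at random on the sphere rather than warm-starting near θ*; this snippet reuses the `rng`, `X`, `y`, `theta_star`, and `run_em` names from the previous (hypothetical) sketch:

```python
# Continuing the sketch above: random initialization on the unit sphere,
# then full-sample EM. Hypothetical setup, for illustration only.
n_trials = 20
sq_errors = []
for _ in range(n_trials):
    theta0 = rng.standard_normal(d)
    theta0 /= np.linalg.norm(theta0)
    est = run_em(X, y, theta0, n_iters=25)
    # The mixture is identifiable only up to sign, so score against +/- theta*.
    sq_errors.append(min(np.linalg.norm(est - theta_star),
                         np.linalg.norm(est + theta_star)) ** 2)
print(f"median squared l2 error over {n_trials} random inits: {np.median(sq_errors):.4f}")
```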
“…We analyze the cross-fitted AIPW estimator in the "proportional asymptotic regime", where the number of observations n and the number of features p both diverge, with the ratio p/n converging to some constant κ > 0. This regime has attracted considerable recent attention in high-dimensional statistics [12, 13, 15-18, 23, 25, 29, 30, 38-40, 42, 44, 46, 55, 59, 61, 65, 82, 84, 94, 102, 104, 107, 109, 115, 117, 118], statistical machine learning and the analysis of algorithms [31, 36, 60, 67, 68, 71, 72, 74, 79], econometrics [4, 5, 14, 26-28, 51], etc., and shares roots with probability theory and statistical physics [78, 120]. Asymptotic approximations derived under this regime demonstrate commendable performance even under moderate sample sizes (cf.…”
confidence: 99%
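
For reference, the proportional asymptotic regime mentioned in this quote is standardly written as follows (a textbook formulation, not quoted from the paper):

```latex
% Proportional asymptotics: sample size and dimension diverge together,
% at a fixed aspect ratio kappa.
\[
  n \to \infty, \qquad p = p(n) \to \infty, \qquad \frac{p}{n} \to \kappa \in (0, \infty),
\]
% in contrast to classical asymptotics, where p stays fixed as n grows.
```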