Statistical Convergence of the EM Algorithm on Gaussian Mixture Models
Preprint, 2018. DOI: 10.48550/arxiv.1810.04090

Cited by 6 publications (10 citation statements, 2019–2024). References 16 publications.

“…It guarantees exact recovery as σ → 0. This is the first result showing that the statistical error rate of the EM algorithm does not depend on the distance between any two regression vectors in a noisy environment, as opposed to all previous analyses of EM [3,4,8,9,18]. We provide a detailed discussion of this issue in Section 5.…”
Section: Remark
Mentioning confidence: 78%
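
To make the quantities in this statement concrete, here is a minimal sketch of the EM iteration for the symmetric two-component mixed linear regression model y_i = z_i⟨x_i, β*⟩ + ε_i with z_i ∈ {±1} and ε_i ~ N(0, σ²); the function name em_step_mlr, the warm start, and all constants are illustrative assumptions rather than the cited papers' exact setup.

```python
import numpy as np

def em_step_mlr(X, y, beta, sigma):
    """One EM iteration for symmetric two-component mixed linear
    regression y_i = z_i * <x_i, beta> + N(0, sigma^2), z_i in {+1, -1}.
    Illustrative sketch, not the exact algorithm of the cited works."""
    # E-step: E[z_i | x_i, y_i] = tanh(y_i * <x_i, beta> / sigma^2),
    # the numerically stable form of 2 * P(z_i = +1 | x_i, y_i) - 1.
    s = np.tanh(y * (X @ beta) / sigma**2)
    # M-step: ordinary least squares against the posterior-signed
    # responses s_i * y_i.
    return np.linalg.solve(X.T @ X, X.T @ (s * y))

rng = np.random.default_rng(0)
n, d, sigma = 2000, 5, 0.1
beta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
z = rng.choice([-1.0, 1.0], size=n)
y = z * (X @ beta_star) + sigma * rng.normal(size=n)

beta = beta_star + 0.5 * rng.normal(size=d)   # warm start near beta*
for _ in range(50):
    beta = em_step_mlr(X, y, beta, sigma)
# As sigma -> 0 the posterior signs saturate to the true z_i, so the
# fixed point approaches beta* exactly (up to a global sign flip).
print(min(np.linalg.norm(beta - beta_star), np.linalg.norm(beta + beta_star)))
```
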
“…In the special case of two balanced mixtures, global convergence results have been established in [6,7] for GMMs, and in [2] for MLR. For more than two components, a negative result on global convergence of the EM algorithm for 3-GMM has been established [6], while [8,9] give a local convergence result for k-GMM with arbitrary k ≥ 3. Attempts have been made to obtain analogous results for mixed linear regression.…”
Section: Related Work
Mentioning confidence: 99%
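
For context, the two-balanced-mixture setting of [6,7] is often phrased as the symmetric model ½N(μ*, I) + ½N(−μ*, I), for which the EM iteration has a particularly clean form. The following is a hedged sketch of that sample-level update; the function name em_step_gmm2 and all constants are our illustrative choices.

```python
import numpy as np

def em_step_gmm2(X, mu):
    """One EM iteration for the balanced symmetric two-component GMM
    0.5 * N(mu, I) + 0.5 * N(-mu, I).  Illustrative sketch only."""
    # E-step: E[z_i | x_i] = tanh(<x_i, mu>) with labels z_i in {+1, -1};
    # tanh is the stable form of 2 * P(z_i = +1 | x_i) - 1.
    s = np.tanh(X @ mu)
    # M-step: the new mean is the posterior-signed sample average.
    return (s[:, None] * X).mean(axis=0)

rng = np.random.default_rng(1)
n, mu_star = 5000, np.array([2.0, -1.0, 0.5])
z = rng.choice([-1.0, 1.0], size=n)
X = z[:, None] * mu_star + rng.normal(size=(n, 3))

mu = rng.normal(size=3)            # arbitrary (not warm) initialization
for _ in range(30):
    mu = em_step_gmm2(X, mu)
print(min(np.linalg.norm(mu - mu_star), np.linalg.norm(mu + mu_star)))
```

In this symmetric case the iteration converges from almost any starting point, which is the sense in which the cited global-convergence results hold.
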
“…Results of this nature were then generalized to the k-component mixture of linear regressions in [24]. The k-GMM for a general k ≥ 2 was studied in [41,45], which provided results comparable to [3] for gradient EM under a minimal separation condition between the means and closeness of the initial guess to the true means. Beyond the i.i.d.…”
Section: Other Known Results
Mentioning confidence: 99%
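
To clarify the "gradient EM" variant referenced here: rather than solving the M-step in closed form, each iteration takes one gradient ascent step on the EM surrogate Q(· | μ^t). A minimal sketch for k spherical unit-variance components with known uniform weights follows; the function name gradient_em_step, the step size, and the model restrictions are illustrative assumptions.

```python
import numpy as np

def gradient_em_step(X, mus, eta):
    """One gradient-EM step for a k-component spherical GMM with unit
    variances and known uniform weights.  Illustrative sketch only.
    X: (n, d) data, mus: (k, d) current means, eta: step size."""
    # E-step: responsibilities w[i, j] = P(z_i = j | x_i) via a stable
    # softmax over the log-densities -||x_i - mu_j||^2 / 2.
    logits = -0.5 * np.sum((X[:, None, :] - mus[None, :, :]) ** 2, axis=2)
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    # Gradient step in place of the exact M-step:
    # grad_{mu_j} Q = (1/n) * sum_i w[i, j] * (x_i - mu_j).
    grad = (w[:, :, None] * (X[:, None, :] - mus[None, :, :])).mean(axis=0)
    return mus + eta * grad

rng = np.random.default_rng(2)
mu_star = np.array([[4.0, 0.0], [-4.0, 0.0], [0.0, 4.0]])  # separated means
z = rng.integers(3, size=6000)
X = mu_star[z] + rng.normal(size=(6000, 2))

mus = mu_star + 0.5 * rng.normal(size=(3, 2))   # initialization near truth
for _ in range(200):
    mus = gradient_em_step(X, mus, eta=1.0)
print(np.linalg.norm(mus - mu_star, axis=1))
```

Gradient EM shares its fixed points with exact EM, so the separation and warm-start conditions in the quoted statement play the same role: they keep the iterates in a basin where this single gradient step contracts toward the true means.
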
“…The EM algorithm is a commonly used approach when dealing with latent variables and missing values [49,47,33,16]. The statistical properties of the standard EM algorithm, such as local convergence and minimax optimality, have recently been studied in [6,56,60,9,63], while the development of differentially private EM algorithms, especially the theory of the optimal trade-off between privacy and accuracy, is still largely unexplored. In this paper, we propose novel differentially private EM algorithms in both the classic low-dimensional setting and the contemporary high-dimensional setting, where the dimension is much larger than the sample size.…”
Section: Introduction
Mentioning confidence: 99%
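
As a hedged illustration of the privacy–accuracy trade-off mentioned above, the generic Gaussian-mechanism recipe privatizes an EM step by clipping each sample's contribution and adding calibrated noise. The sketch below applies that recipe to the symmetric two-component GMM update from the earlier example; it is not the algorithm proposed in the citing paper, and the names private_em_step, clip, and noise_mult, as well as the calibration, are illustrative assumptions.

```python
import numpy as np

def private_em_step(X, mu, clip, noise_mult, rng):
    """One noisy EM step for the symmetric two-component GMM, using the
    generic clip-and-add-Gaussian-noise recipe.  Illustrative sketch;
    NOT the cited paper's algorithm.  noise_mult must be calibrated to
    the desired (epsilon, delta) budget for a formal guarantee."""
    s = np.tanh(X @ mu)                       # E-step, as in plain EM
    contrib = s[:, None] * X                  # per-sample M-step terms
    norms = np.maximum(np.linalg.norm(contrib, axis=1, keepdims=True), 1e-12)
    contrib *= np.minimum(1.0, clip / norms)  # bound each sample's influence
    # Replacing one sample moves the mean by at most 2 * clip / n, so the
    # Gaussian noise is scaled to that sensitivity.
    n = len(X)
    noise = rng.normal(scale=noise_mult * 2.0 * clip / n, size=mu.shape)
    return contrib.mean(axis=0) + noise
```

Larger noise_mult buys a tighter privacy budget at the cost of statistical accuracy; the optimal rate of that trade-off is precisely what the quoted introduction identifies as largely unexplored.
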