2019
DOI: 10.48550/arxiv.1905.12106
Preprint

EM Converges for a Mixture of Many Linear Regressions

Jeongyeol Kwon,
Constantine Caramanis

Abstract: We study the convergence of the Expectation-Maximization (EM) algorithm for mixtures of linear regressions with an arbitrary number k of components. We show that as long as the signal-to-noise ratio (SNR) is Ω(k), well-initialized EM converges to the true regression parameters. Previous results for k ≥ 3 have only established local convergence for the noiseless setting, i.e., where the SNR is infinitely large. Our results extend the scope to the noisy setting, and notably, we establish a statistical error ra…
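To make the setting concrete, the following is an illustrative sketch (not the paper's implementation) of EM for a mixture of k linear regressions, assuming Gaussian noise with a known level sigma and equal, known mixing weights; the function name and its parameters are hypothetical.

```python
import numpy as np

def em_mixed_linear_regression(X, y, k, n_iters=50, sigma=1.0, init=None, seed=0):
    """Illustrative EM for a mixture of k linear regressions.

    Model: y_i = <beta_{z_i}, x_i> + N(0, sigma^2), with z_i drawn
    uniformly from k components. Assumes equal, known mixing weights
    and a known noise level sigma. `init` supplies the initialization;
    the paper's guarantee is for well-initialized EM.
    """
    n, d = X.shape
    if init is None:
        betas = np.random.default_rng(seed).standard_normal((k, d))
    else:
        betas = np.array(init, dtype=float).copy()
    for _ in range(n_iters):
        # E-step: posterior responsibility of each component for each point
        resid = y[:, None] - X @ betas.T              # shape (n, k)
        logw = -0.5 * (resid / sigma) ** 2
        logw -= logw.max(axis=1, keepdims=True)       # numerical stability
        w = np.exp(logw)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted least squares, one solve per component
        for j in range(k):
            Xw = X * w[:, j][:, None]
            A = Xw.T @ X + 1e-8 * np.eye(d)           # tiny ridge for safety
            betas[j] = np.linalg.solve(A, Xw.T @ y)
    return betas
```

With a good initialization and well-separated components, the responsibilities quickly become near-hard assignments and each M-step reduces to per-component least squares, which is the regime the convergence result addresses.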

Cited by 1 publication (3 citation statements)
References 8 publications
“…In particular, if ∆ = Ω(1), we again attain runtime which is sub-exponential in k. In the special case when the mixing weights are all known, and assuming that ς = O(∆ / (k² polylog(k))), by combining this result with the local convergence result of [KC19], we can again attain arbitrarily good accuracy by slightly increasing the runtime; see Section 8.5 and Theorem 8.33 for details.…”
Section: Our Contributions
confidence: 90%
“…In Section 8.4 we prove Theorem 8.1. Finally, in Section 8.5, we briefly describe how to leverage the local convergence result of [KC19] in conjunction with our algorithm to get improved noise tolerance in the setting where the mixing weights are a priori known.…”
Section: Learning All Components Under Noise
confidence: 99%