1983
DOI: 10.1214/aos/1176346060

On the Convergence Properties of the EM Algorithm

Cited by 2,691 publications (1,567 citation statements)
References 17 publications
“…, y*_{i,mis}^{(M)}. Thus, by Theorem 2, the sequence l*_obs(θ^(t)) is monotonically increasing and, under the conditions stated in Wu (1983), the convergence of θ̂^(t) to a stationary point follows for fixed M. Theorem 2 does not hold for the sequence obtained from the Monte Carlo EM method for fixed M, because the imputed values are re-generated at each E-step of the Monte Carlo EM method, and convergence is very hard to check for the Monte Carlo EM (Booth & Hobert, 1999).…”
Section: Therefore (19) implies (20)
confidence: 79%
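The contrast drawn in the statement above can be made concrete with a small sketch. The setup below (right-censored normal data, the cutoff `c`, the helper `draw_above`, and all constants) is an assumed toy example, not taken from the cited papers; it only illustrates Monte Carlo EM with M imputations per censored value and why re-drawing the imputations at each E-step makes the parameter path fluctuate rather than increase a fixed objective monotonically.

```python
import math
import random

random.seed(1)

# Assumed toy setup: estimate the mean mu of N(mu, 1) when observations
# above the cutoff c are right-censored.
c = 1.0
true_mu = 0.5
sample = [random.gauss(true_mu, 1.0) for _ in range(500)]
obs = [y for y in sample if y <= c]        # fully observed values
n_cens = len(sample) - len(obs)            # number of censored values

def draw_above(mu, c):
    """Rejection-sample one value from N(mu, 1) truncated to (c, inf)."""
    while True:
        y = random.gauss(mu, 1.0)
        if y > c:
            return y

M = 50       # Monte Carlo imputations per censored value
mu = 0.0     # starting value
for _ in range(30):
    # E-step (Monte Carlo): impute each censored value M times from the
    # conditional distribution given the current parameter estimate.
    imputed = [draw_above(mu, c) for _ in range(n_cens * M)]
    # M-step: maximize the averaged complete-data log-likelihood; for the
    # normal mean this is a weighted sample mean (each imputation weighs 1/M).
    mu = (sum(obs) + sum(imputed) / M) / (len(obs) + n_cens)
    # Because the imputations are re-drawn at each E-step, successive mu
    # values fluctuate around the MLE instead of converging monotonically;
    # holding one fixed set of imputations would restore deterministic EM.
```

If the M imputations were drawn once and held fixed across iterations, the loop would be an ordinary EM algorithm on the augmented data, and the monotone-ascent and stationary-point results discussed in the statement would apply directly.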
“…These two steps are applied iteratively until the algorithm converges to a local maximum of the likelihood function (Schafer 1997). A detailed treatment of the convergence properties of the EM algorithm is given by Wu (1983).…”
Section: Expectation-Maximization Algorithm
confidence: 99%
“…The EM algorithm has two steps which are applied alternately in an iterative fashion. Each step is guaranteed not to decrease the likelihood of the observed data, and the algorithm converges to a local maximum of the likelihood function [12,15]. For each observation and each component, the method yields the probability that the observation stems from that component.…”
Section: Probabilistic Modeling
confidence: 99%
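The two properties named in the statement above — the per-iteration ascent of the observed-data likelihood and the per-observation component probabilities (responsibilities) — can be checked on a toy model. This is a minimal sketch, not the cited authors' implementation: the two-component unit-variance Gaussian mixture, the helpers `phi` and `log_lik`, and the starting values are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Assumed toy data: two-component Gaussian mixture with unit variances.
data = [random.gauss(0.0, 1.0) for _ in range(150)] + \
       [random.gauss(4.0, 1.0) for _ in range(150)]

def phi(x, mu):
    """Normal density with mean mu and standard deviation 1."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def log_lik(pi, mu1, mu2):
    """Observed-data log-likelihood of the two-component mixture."""
    return sum(math.log(pi * phi(x, mu1) + (1.0 - pi) * phi(x, mu2))
               for x in data)

pi, mu1, mu2 = 0.5, -1.0, 1.0          # deliberately poor starting values
trace = [log_lik(pi, mu1, mu2)]
for _ in range(30):
    # E-step: responsibility of component 1 for each observation — the
    # probability that the observation stems from that component.
    r = [pi * phi(x, mu1) / (pi * phi(x, mu1) + (1.0 - pi) * phi(x, mu2))
         for x in data]
    # M-step: re-estimate mixing weight and means from the soft assignments.
    n1 = sum(r)
    pi = n1 / len(data)
    mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
    mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
    trace.append(log_lik(pi, mu1, mu2))

# Ascent property: the observed-data log-likelihood never decreases.
assert all(b >= a - 1e-9 for a, b in zip(trace, trace[1:]))
```

After the loop, `trace` is a non-decreasing sequence and the fitted means sit near the two component centers; the responsibilities `r` are exactly the per-observation component probabilities the statement refers to.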