2007
DOI: 10.1016/j.csda.2006.12.013

Acceleration schemes with application to the EM algorithm


Cited by 19 publications (16 citation statements)
References 5 publications
“…For this purpose, we recall in Section 2 the definition of the polynomial vector extrapolation methods of order k. The choice is motivated by the fact that these methods, which combine successive Picard iterates, are powerful methods for solving problem (1), and that Lemaréchal's method is one of the simplest members of this family. In Section 3, we justify the construction of the adjusted method by means of a new polynomial extrapolation transformation and we define the squaring technique for any order k. Moreover, we study the required operation count and storage.…”
Section: It Is Described By
Citation type: mentioning; confidence: 99%
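To make the flavor of these extrapolation cycles concrete, here is a minimal Python sketch of a squared extrapolation step built from two successive Picard (EM) iterates. It is only an illustration: the step length follows the SQUAREM-style choice of Varadhan and Roland rather than the order-k polynomial transformation of the cited paper, and the toy fixed-point map (the classic multinomial linkage EM example), the function names, and the absence of any safeguards are assumptions made here, not details taken from the paper.

```python
import numpy as np

def em_map(theta, y=(125.0, 18.0, 20.0, 34.0)):
    """One EM update for the classic multinomial linkage example of
    Dempster, Laird and Rubin (1977), used here only as a toy fixed-point map."""
    y1, y2, y3, y4 = y
    x2 = y1 * theta / (2.0 + theta)          # E-step: expected split of the first cell
    return (x2 + y4) / (x2 + y2 + y3 + y4)   # M-step

def squared_step(F, theta):
    """One squared-extrapolation cycle built from two successive Picard iterates of F."""
    theta1 = F(theta)
    theta2 = F(theta1)
    r = theta1 - theta                        # first difference
    v = (theta2 - theta1) - r                 # second difference
    if np.linalg.norm(v) < 1e-12:             # already (numerically) at a fixed point
        return theta2
    alpha = -np.linalg.norm(r) / np.linalg.norm(v)   # SQUAREM-style step length
    accelerated = theta - 2.0 * alpha * r + alpha**2 * v
    return F(accelerated)                     # finish with a stabilising EM step

theta = np.array([0.1])
for _ in range(25):
    new = squared_step(em_map, theta)
    converged = np.max(np.abs(new - theta)) < 1e-10
    theta = new
    if converged:
        break
print(theta)   # about [0.6268], the maximum likelihood estimate for these counts
```

Run as is, the iteration should settle near 0.6268 after only a few accelerated cycles, compared with the slow linear convergence of plain EM on the same example.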
“…We initially investigated a variety of schemes to accelerate the EM iterations along the lines of Varadhan and Roland (2008) and Berlinet and Roland (2007), which, while somewhat helpful, did not significantly improve the computational speed. The crucial insight came, again, only with the realization that the task of maximizing L(f) is a convex problem.…”
Section: Non-parametric Maximum Likelihood
Citation type: mentioning; confidence: 99%
“…Previously, Huang et al. (2005) reported that CTJEM converged faster than the adaptive overrelaxed EM (aEM) for Bayesian networks when the data are highly sparse. More recently, we conducted a comprehensive empirical comparison of several new EM acceleration methods published from 2003 to 2007 (Varadhan and Roland 2004; Kuroda and Sakakihara 2006; Berlinet and Roland 2007). We found that aEM still outperformed the other new methods, second only to TJ2aEM, a variant of TJEM that we developed to accelerate aEM (Huang et al. 2007a).…”
Section: Bayesian Network
Citation type: mentioning; confidence: 87%
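Since several of the methods compared in this statement are built on adaptive overrelaxed EM, a brief sketch of the underlying over-relaxed iteration may help. The grow-or-reset rule for the step length eta below is a simplified assumption for illustration and does not reproduce the exact adaptation rule of aEM or the TJEM variants; the toy linkage model and all names are the same hypothetical example used in the earlier sketch.

```python
from math import log

Y1, Y2, Y3, Y4 = 125.0, 18.0, 20.0, 34.0     # same toy linkage counts as above

def em_map(theta):
    x2 = Y1 * theta / (2.0 + theta)           # E-step
    return (x2 + Y4) / (x2 + Y2 + Y3 + Y4)    # M-step

def loglik(theta):
    # observed-data log-likelihood, up to an additive constant
    return Y1 * log(2.0 + theta) + (Y2 + Y3) * log(1.0 - theta) + Y4 * log(theta)

def overrelaxed_em(theta, max_iter=200, tol=1e-10):
    """Move along the EM direction with step length eta, growing eta while the
    log-likelihood keeps improving and falling back to a plain EM step otherwise."""
    eta, ll = 1.0, loglik(theta)
    for _ in range(max_iter):
        step = em_map(theta) - theta
        if abs(step) < tol:
            break
        candidate = theta + eta * step
        if 0.0 < candidate < 1.0 and loglik(candidate) >= ll:
            theta, ll, eta = candidate, loglik(candidate), 2.0 * eta   # accept, grow step
        else:
            theta, ll, eta = theta + step, loglik(theta + step), 1.0   # plain EM fallback
    return theta

print(overrelaxed_em(0.1))   # about 0.6268
```

When the candidate step would leave the parameter space or decrease the observed-data log-likelihood, the scheme falls back to a plain EM update, which keeps the iteration monotone in the likelihood.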