2001
DOI: 10.1109/78.923303
Convergence of exponentiated gradient algorithms

Abstract: This paper studies three related algorithms: the (traditional) Gradient Descent (GD) Algorithm, the Exponentiated Gradient Algorithm with Positive and Negative weights (EG± algorithm) and the Exponentiated Gradient Algorithm with Unnormalized Positive and Negative weights (EGU± algorithm). These algorithms have been previously analyzed using the "mistake-bound framework" in the computational learning theory community. In this paper we perform a traditional signal processing analysis in terms of the mean squa…
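The EG± algorithm named in the abstract keeps separate positive and negative weight vectors and updates them multiplicatively, in contrast to GD's additive update. Here is a minimal Python sketch of one EG± step for the squared loss, assuming the standard Kivinen-Warmuth form; the function names, the learning rates eta and mu, and the total-mass parameter U are illustrative choices, not taken from the paper.

    import numpy as np

    def eg_pm_step(w_pos, w_neg, x, y, eta=0.1, U=1.0):
        # One EG+- update for the squared loss; the prediction uses the
        # difference of the positive and negative weight vectors.
        e = y - (w_pos - w_neg) @ x          # prediction error
        r = np.exp(2.0 * eta * e * x)        # exp(-eta * gradient) factors
        w_pos, w_neg = w_pos * r, w_neg / r  # the r- factor is the reciprocal of r+
        Z = (w_pos.sum() + w_neg.sum()) / U  # renormalize total weight mass to U
        return w_pos / Z, w_neg / Z

    def gd_step(w, x, y, mu=0.1):
        # One GD (LMS-style) additive update for the same loss, for comparison.
        return w + 2.0 * mu * (y - w @ x) * x

The multiplicative form of the weight update is what distinguishes EG± from the additive GD step.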

Cited by 28 publications (12 citation statements). References 15 publications (17 reference statements).
“…The second point is that the term keeps the stepsizes of individual taps from going to zero as individual taps go to zero. The third point is that according to equations given in [6], [8], and [9], NSLMS and LMS have the same asymptotic MSE, so long as the stepsize µ is the same for both algorithms.…”
Section: System Model and Algorithm
confidence: 99%
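The quoted statement turns on a per-tap stepsize that scales with the tap's magnitude, with an added term acting as a floor. The exact NSLMS recursion is defined in the citing paper, so the following Python sketch is only illustrative: the update form, the function name, and the parameters mu and rho are assumptions chosen to show how such a floor term keeps a tap's stepsize from collapsing as the tap itself goes to zero.

    import numpy as np

    def tap_proportional_lms_step(w, x, d, mu=0.01, rho=1e-3):
        # Illustrative sparse-LMS-style update (form assumed, not taken
        # from the cited papers): each tap's effective stepsize grows
        # with its magnitude, and rho keeps it from reaching zero.
        e = d - w @ x                              # a priori output error
        return w + mu * (rho + np.abs(w)) * e * x  # floored per-tap stepsize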
“…(Note that there is some similarity between NSLMS and the EG ± algorithm in equations (3) and (4) of [6].) There is of course a disadvantage: if the optimal weight vector changes drastically, then the estimate will temporarily have the wrong impression of which taps are important.…”
Section: System Model and Algorithm
confidence: 99%
“…Exponentiated gradient descent shares similar convergence guarantees [9], but implements a different prior over the space of score functions. It places large prior weight on score functions with a great deal of dynamic range.…”
Section: Exponentiated Gradient Variant
confidence: 99%
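The dynamic-range remark can be made concrete: under repeated gradients of the same sign, an additive update grows a weight linearly while a multiplicative (exponentiated) update grows it geometrically. A toy Python comparison under assumed conditions (stepsize eta, constant gradient signal g, k steps, starting weights 0 and 1):

    import math

    eta, g, k = 0.1, 1.0, 50
    w_gd = 0.0 + eta * g * k            # additive update from 0: grows linearly in k
    w_eg = 1.0 * math.exp(eta * g * k)  # multiplicative update from 1: grows exponentially in k
    print(w_gd, w_eg)                   # 5.0 vs ~148.4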
“…The concept of exponentiated gradient adaptation can be seen by considering, for example, the exponentiated gradient algorithm with unnormalized weights (EGUAE), which is given in [15,19,16] as…”
Section: Exponentiated Gradient
confidence: 99%
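The unnormalized exponentiated gradient update referred to here scales each (positive) weight by the exponential of the negative gradient, with no renormalization step. A minimal Python sketch for the squared loss, assuming the standard EGU form from the exponentiated-gradient literature; eta and the function name are illustrative:

    import numpy as np

    def egu_step(w, x, d, eta=0.01):
        # Unnormalized exponentiated-gradient update (sketch): each tap
        # is multiplied by exp(-eta * gradient), which keeps it positive.
        e = d - w @ x                         # prediction error
        return w * np.exp(2.0 * eta * e * x)  # multiplicative, no normalization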