2005
DOI: 10.3386/t0317
Generalized Stochastic Gradient Learning

Abstract: We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement and we show that there is a transformation of variables for which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms…
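To make the abstract's terms concrete, here is a minimal sketch (my own illustration, not code from the paper) of constant-gain stochastic gradient learning in a simple self-referential model. The model, parameter values, and variable names are assumptions chosen for illustration; the point is only the form of the SG update, in which estimates move along the gradient of the squared forecast error scaled by a constant gain.

```python
import numpy as np

# Illustrative sketch (assumption, not the paper's code) of constant-gain
# stochastic gradient (SG) learning in a self-referential model
#   y_t = mu + alpha * E*_{t-1} y_t + delta * w_{t-1} + eta_t,
# where agents' perceived law of motion is y_t = a + b * w_{t-1}.
# The rational expectations equilibrium (REE) has
#   a = mu / (1 - alpha),  b = delta / (1 - alpha).

rng = np.random.default_rng(0)
mu, alpha, delta = 1.0, 0.5, 0.8      # illustrative model parameters
gamma = 0.02                          # constant gain
T = 20_000

phi = np.zeros(2)                     # agents' estimates (a, b)
w = 0.0                               # exogenous regressor w_{t-1}
for t in range(T):
    z = np.array([1.0, w])            # regressors known at t-1
    y_forecast = phi @ z              # E*_{t-1} y_t
    y = mu + alpha * y_forecast + delta * w + rng.normal(scale=0.1)
    # SG update: step along the gradient of the squared forecast error,
    # scaled by the constant gain gamma.
    phi = phi + gamma * z * (y - phi @ z)
    w = 0.9 * w + rng.normal(scale=0.1)   # exogenous AR(1) driver

print("SG estimates:", phi)
print("REE values  :", mu / (1 - alpha), delta / (1 - alpha))
```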

Cited by 33 publications (35 citation statements); references 22 publications.
“…Parameters estimated with both recursive least squares and Bayesian learning converge to the rational expectations equilibrium. However, it is also evident that the dynamic paths of a_t and b_t differ, and that these differences decrease over time. Figure 2 depicts the first 100 periods from the same simulation.…”
Section: Bayesian Learning Dynamics Can Differ From RLS
confidence: 99%
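The contrast drawn in this excerpt, between recursive least squares and a Bayesian (Kalman-filter) estimator of the same regression, can be sketched as below. This is a hedged illustration under my own assumptions (the regression, noise levels, prior covariance, and gain sequence are all hypothetical), showing why the two recursions can follow different early paths while converging to the same limit.

```python
import numpy as np

# Hedged sketch (not the cited paper's code): recursive least squares (RLS)
# versus a Kalman-filter ("Bayesian") update for the same regression
#   y_t = a + b * x_t + e_t.
# Both recursions converge to the same limit, but their early paths differ
# because the Kalman path depends on the assumed prior covariance.

rng = np.random.default_rng(1)
a_true, b_true, sigma_e = 0.5, 1.5, 0.3
T = 200

# RLS state: coefficient estimate phi and moment matrix R
phi_rls = np.zeros(2)
R = np.eye(2)

# Kalman state: posterior mean m and covariance P (with state noise Q > 0
# the coefficients would be treated as random walks; Q = 0 here)
m = np.zeros(2)
P = 10.0 * np.eye(2)                  # loose prior -> different early path
Q = np.zeros((2, 2))

x = rng.normal()
for t in range(1, T + 1):
    z = np.array([1.0, x])
    y = a_true + b_true * x + rng.normal(scale=sigma_e)

    # --- RLS with decreasing gain (offset keeps R well conditioned early) ---
    g = 1.0 / (t + 10)
    R = R + g * (np.outer(z, z) - R)
    phi_rls = phi_rls + g * np.linalg.solve(R, z) * (y - phi_rls @ z)

    # --- Kalman-filter update ---
    P_pred = P + Q
    S = z @ P_pred @ z + sigma_e**2
    K = P_pred @ z / S
    m = m + K * (y - m @ z)
    P = P_pred - np.outer(K, z) @ P_pred

    x = rng.normal()

print("RLS   :", phi_rls)
print("Kalman:", m)
```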
“…Here the "small constant gain" approximation (i.e. a small gain in (1)) is employed, making the framework similar to that used in Evans, Honkapohja, and Williams (2010). Formally, the setting is now…”
Section: Additional Interpretation and Analysis
confidence: 99%
“…An alternative set-up would be to allow time variation in the parameter. Papers by Bullard (1992), McGough (2003), Sargent and Williams (2005), and Evans, Honkapohja, and Williams (2010) look at this issue in models with learning. Cogley and Sargent (2005) look at empirical time-varying parameter models without learning.…”
Section: Priors on Parameter Variation
confidence: 99%
“…Sargent and Williams (2005) and Evans et al. (2010) establish a Bayesian interpretation of learning. Beyond relating the learning algorithm to the Kalman filter, the Bayesian optimal estimator of a random-walk time-varying-parameters model, the priors also carry implications about the specific calibrations of the learning gains.…”
Section: Related Literature, Theoretical Foundations
confidence: 99%
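The link this excerpt describes, from a prior on parameter drift to a calibrated learning gain, can be illustrated with a small sketch. This is my own assumed scalar example (the state/measurement noise values and the model are hypothetical): for a random-walk time-varying-parameter model, the Kalman gain settles to a steady state, and a learning rule with that constant gain behaves like the Bayesian estimator, so larger assumed parameter drift maps into a larger gain.

```python
import numpy as np

# Illustrative sketch (an assumption about the general mechanism, not the
# cited papers' code): scalar random-walk time-varying-parameter model
#   beta_t = beta_{t-1} + v_t,   v_t ~ N(0, q)
#   y_t    = beta_t * x_t + e_t, e_t ~ N(0, r)
# The Kalman gain converges to a steady state; a constant-gain learning rule
# using that gain approximates the Bayesian optimal estimator, so the prior
# on parameter drift (q) pins down the calibration of the learning gain.

def steady_state_gain(q, r, x=1.0, iters=500):
    """Iterate the scalar Riccati recursion until the Kalman gain converges."""
    p = 1.0
    for _ in range(iters):
        p_pred = p + q
        k = p_pred * x / (x * p_pred * x + r)
        p = (1.0 - k * x) * p_pred
    return k

for q in (1e-4, 1e-3, 1e-2):
    print(f"state noise q={q:g} -> steady-state Kalman gain {steady_state_gain(q, r=1.0):.4f}")
```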