A simple recursive forecasting model (2006)
DOI: 10.1016/j.econlet.2005.09.005

Abstract: We compare the performance of alternative recursive forecasting models. A simple constant gain algorithm, used widely in the learning literature, both forecasts well out of sample and also provides the best fit to the Survey of Professional Forecasters.

JEL classification codes: E37, D84, D83.
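The "simple constant gain algorithm" of the abstract is constant-gain recursive least squares, the workhorse recursion of the adaptive learning literature. As a rough, hedged sketch of that textbook update rule (an illustration only, not the authors' exact specification, data handling, or gain choice):

```python
import numpy as np

def constant_gain_rls(y, X, gamma=0.05):
    """Constant-gain recursive least squares forecaster.

    phi_t = phi_{t-1} + gamma * R_t^{-1} x_t (y_t - x_t' phi_{t-1})
    R_t   = R_{t-1} + gamma * (x_t x_t' - R_{t-1})

    The fixed gain `gamma` discounts old observations geometrically,
    unlike ordinary RLS whose gain 1/t weights all past data equally.
    gamma=0.05 is an arbitrary illustrative value.
    """
    T, k = X.shape
    phi = np.zeros(k)            # initial coefficient estimate
    R = np.eye(k)                # initial second-moment matrix
    forecasts = np.empty(T)
    for t in range(T):
        x = X[t]
        forecasts[t] = x @ phi   # one-step-ahead forecast with last period's phi
        R = R + gamma * (np.outer(x, x) - R)
        phi = phi + gamma * np.linalg.solve(R, x * (y[t] - x @ phi))
    return phi, forecasts
```

The only tuning choice is the gain: larger values track parameter drift more quickly but produce noisier coefficient estimates.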

Cited by 228 publications (150 citation statements). References 23 publications.
“…Experimental evidence is provided by Heemeijer et al (2004) and Hommes et al (2005). This paper extends Branch (2004) by enlarging the set of competing reduced form models. Our approach, like Mankiw et al (2003), tests for a particular form of sticky information flows in agents' survey expectations.…”
Section: Article In Press (mentioning)
confidence: 76%
“…Recent approaches impose bounded rationality at the primitive level; see, for example, Mankiw and Reis (2002), Ball et al (2005), Reis (2004), Branch et al (2004) and Sims (2003). Of these the sticky information model of Mankiw and Reis (2002) yields important (and tractable) implications for macroeconomic policy.…”
Section: Introduction (mentioning)
confidence: 99%
“…It has also been argued by Orphanides and Williams (2005a) that constant gain learning is more reasonable than RLS learning because the learning rule itself is stationary whereas it is time-dependent in RLS. Empirical support for constant gain learning is provided in Orphanides and Williams (2005b), Evans (2006b), and Milani (2005).…”
Section: Real-time Learning With Constant Gain (mentioning)
confidence: 99%
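The distinction drawn in the statement above lies entirely in the gain sequence: ordinary RLS learning applies a decreasing gain 1/t to the same recursion, so the rule is time-dependent, while constant-gain learning fixes the gain and makes the rule stationary. A minimal sketch of the two choices (textbook forms, illustrative only):

```python
def gain(t, mode="decreasing", gamma=0.05):
    """Gain applied at date t (t >= 1) in the recursive update.

    'decreasing' (ordinary RLS): 1/t, all past observations weighted
    equally, so the learning rule changes over time.
    'constant': a fixed gamma, old data discounted geometrically,
    so the learning rule itself is stationary.
    """
    return 1.0 / t if mode == "decreasing" else gamma
```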
“…The usual choice for this purpose has been the Least Squares (LS) algorithm (Branch and Evans, 2006; Markiewicz and Pick, 2014), possibly due to its widespread popularity between econometricians. A computationally simpler alternative is offered by the Stochastic Gradient (SG) algorithm (Barucci and Landi, 1997; Evans and Honkapohja, 1998).…”
Section: Introduction (mentioning)
confidence: 99%
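The stochastic gradient (SG) alternative mentioned in the statement above drops the second-moment matrix from the least-squares recursion and updates the coefficients by a plain gradient step on the squared one-step forecast error. A minimal sketch of the standard SG update (illustrative, not the cited papers' exact setup):

```python
import numpy as np

def stochastic_gradient(y, X, gamma=0.05):
    """Stochastic-gradient (SG) learning.

    phi_t = phi_{t-1} + gamma * x_t (y_t - x_t' phi_{t-1})

    Computationally cheaper than (constant-gain) least squares because
    no moment matrix R has to be tracked or inverted.
    """
    T, k = X.shape
    phi = np.zeros(k)
    for t in range(T):
        x = X[t]
        phi = phi + gamma * x * (y[t] - x @ phi)
    return phi
```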