2009
DOI: 10.1017/s0266466609100105
Recursive Forecast Combination for Dependent Heterogeneous Data

Abstract: This paper studies a procedure for combining individual forecasts that achieves theoretically optimal performance. The results apply to a wide variety of loss functions and require only a tail condition on the data sequences. The theoretical results show that the bounds remain valid when the combination weights are time varying.
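The abstract describes recursively combining individual forecasts. A standard instance of this idea is the exponentially weighted average forecaster, sketched below under squared-error loss; this is a minimal illustration of the general recursive-combination setup, not the paper's exact algorithm, and the learning-rate name `eta` is an assumption for illustration.

```python
import numpy as np

def combine_forecasts(forecasts, outcomes, eta=0.5):
    """Recursive exponentially weighted forecast combination (sketch).

    Each individual forecaster's weight decays exponentially in its
    cumulative past loss; the combined forecast is the weighted average
    of the individual forecasts.

    forecasts: (T, K) array, K individual forecasts for each of T periods.
    outcomes:  (T,) array of realized values.
    eta: learning-rate parameter (illustrative name and default).
    """
    T, K = forecasts.shape
    cum_loss = np.zeros(K)
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(-eta * cum_loss)
        w /= w.sum()                    # normalize to a probability vector
        combined[t] = w @ forecasts[t]  # combined forecast for period t
        cum_loss += (forecasts[t] - outcomes[t]) ** 2  # squared-error loss
    return combined
```

With two forecasters, one always correct and one always off by 1, the combined forecast starts at their average and shifts toward the accurate forecaster as losses accumulate.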

Cited by 15 publications (19 citation statements); references 33 publications.
“…In addition, macroeconometric models are likely to be subject to structural breaks due to policy changes and shifts in tastes and technology. As Clements and Hendry (1998, 1999, 2006) emphasise, structural breaks are often the main source of forecast failure and represent the most serious form of model uncertainty.…”
Section: Forecasting With the VECX*(22) Model
confidence: 99%
“…The MLS method is implemented according to Algorithm 1 in Sancetta (2010). The core step in the algorithm is to compute the current-period weight (before shrinkage), where one tuning parameter is the learning rate and another controls the speed of learning.…”
Section: Measuring the Performances of Combination Methods
confidence: 99%
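The citation above describes a two-stage update: compute the current-period weight from losses, then apply shrinkage. A hedged sketch of this kind of update is below; the exact MLS details are in Sancetta (2010), and the parameter names `eta` and `shrink` are illustrative assumptions.

```python
import numpy as np

def mls_weights(prev_weights, losses, eta=0.1, shrink=0.05):
    """One step of an exponentiated weight update with shrinkage (sketch).

    First multiply each weight by exp(-eta * loss) and renormalize
    (the "current-period weight before shrinkage"), then shrink toward
    the uniform vector so every weight stays bounded away from zero.
    """
    K = prev_weights.size
    w = prev_weights * np.exp(-eta * losses)  # current-period weight (before shrinkage)
    w /= w.sum()
    return (1 - shrink) * w + shrink * np.ones(K) / K  # shrinkage toward uniform
```

Shrinking toward the uniform vector prevents any forecaster's weight from collapsing to zero, so a forecaster that performs badly early can still regain weight later.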
“…In addition to AFTER, we consider another on-line recursive algorithm from the machine learning literature with shrinking (henceforth MLS) due to Sancetta (2010). Unlike the BG approach, the on-line algorithms tend to select the few top forecasters, thus requiring a much smaller number of estimated parameters.…”
Section: Introduction
confidence: 99%
“…For example, Sancetta [17] assumed that the tails of the target variables are no heavier than exponential decay, which restricts the heaviness of the tails of the forecast errors. Wei & Yang [12] designed a method for errors heavier than the normal distribution but not heavier than the double-exponential distribution.…”
confidence: 99%