2022
DOI: 10.1016/j.ijforecast.2021.04.002

Optimal and robust combination of forecasts via constrained optimization and shrinkage

Abstract: Combining forecasts produced by various models can substantially improve prediction performance compared to the individual models. Standard combination approaches consist of a forecast selection step followed by a weighting scheme. It is not clear, however, which models to include and how to combine them. This is a central question, with a substantial impact on the quality of the aggregate forecast. We propose a robust method that mitigates estimation uncertainty and implicitly features forecast selection in a single step.
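To make the two-step baseline mentioned in the abstract concrete, here is a minimal sketch of a selection-then-weighting combination. The cutoff of retained models and the inverse-MSE weighting rule are illustrative assumptions, not the method proposed in the paper.

    import numpy as np

    def two_step_combination(errors, keep=5):
        # errors: (T, k) matrix of in-sample forecast errors from k candidate models
        mse = (errors ** 2).mean(axis=0)
        selected = np.argsort(mse)[:keep]   # step 1: forecast selection by in-sample accuracy
        w = 1.0 / mse[selected]             # step 2: weight the survivors inversely to their MSE
        return selected, w / w.sum()        # weights normalized to sum to one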

Cited by 15 publications (11 citation statements) · References 54 publications

Citation statements, ordered by relevance:
“…The latter consists of combining the sample covariance matrix (which is easy to compute, asymptotically unbiased but prone to estimation errors) with an estimator that is misspecified and biased but more robust to estimation errors (Ledoit and Wolf 2004). This approach, initially derived in a portfolio optimization context, was recently transposed in Roccazzella et al (2021) to the forecast combination problem, showing that constrained optimization with shrinkage (COS) of Σ can provide a single-step, fast and robust optimal forecast combination strategy. Here we adapt the COS to act as a linear meta-learner, which combines the first-level learners only on the basis of in-sample information.…”
Section: Linear Meta-learners (mentioning; confidence: 99%)
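A minimal sketch of the shrinkage-then-optimize idea described in this excerpt, assuming a diagonal shrinkage target and a fixed shrinkage intensity delta; both are illustrative simplifications, not the Ledoit-Wolf estimator or the exact COS specification of Roccazzella et al (2021).

    import numpy as np

    def cos_weights(errors, delta=0.5):
        # errors: (T, k) in-sample forecast errors of the k first-level learners
        sample_cov = np.cov(errors, rowvar=False)            # asymptotically unbiased, but noisy
        target = np.diag(np.diag(sample_cov))                # structured target: biased, low variance
        sigma = delta * target + (1.0 - delta) * sample_cov  # linear shrinkage of the covariance
        ones = np.ones(errors.shape[1])
        w = np.linalg.solve(sigma, ones)                     # minimum-variance weights under a sum-to-one constraint
        return w / w.sum()

    # Usage: combined_forecast = forecasts @ cos_weights(in_sample_errors)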
“…However, several other alternatives are worth exploring. For example, Roccazzella et al (2021) propose a robust and sparse combination method that mitigates estimation uncertainty and implicitly features forecast selection in a single step. This approach has the additional advantage of relying on a closed-form solution for the estimation of the optimal combination strategy.…”
Section: Introduction (mentioning; confidence: 99%)
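For reference, the generic closed form behind such single-step combinations is the minimum-variance solution under a sum-to-one constraint; with $\Sigma$ denoting the (possibly shrunk) forecast-error covariance matrix,

    \min_{w}\; w^{\top}\Sigma\, w \quad \text{s.t.}\quad \mathbf{1}^{\top}w = 1
    \qquad\Longrightarrow\qquad
    w^{\ast} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}}.

Additional constraints (for example, non-negative weights, which would induce the implicit selection mentioned in the abstract) would modify this expression; the form above is only the standard benchmark.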
“…It is worth mentioning that various shrinkage and machine learning techniques have been applied in the forecasting literature since the pioneering work of Tibshirani (1996). See, e.g., Li and Chen (2014), Conflitti et al (2015), Konzen and Ziegelmann (2016), Stasinakis et al (2016), Bayer (2018), Wilms et al (2018), Kotchoni et al (2019), Coulombe et al (2020), and Roccazzella et al (2020). We complement these works by considering an ℓ2-relaxation of the regularized weight estimation problem, which exhibits certain optimality properties when the forecast-error variance-covariance (VC) matrix can be decomposed into the sum of a low-rank matrix and a VC matrix of idiosyncratic shocks.…”
Section: Introduction (mentioning; confidence: 99%)
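The covariance decomposition referred to in this excerpt can be written, using illustrative factor-model notation, as

    \Sigma \;=\; \Lambda\Lambda^{\top} + \Sigma_{u},

where $\Lambda$ is a $k \times r$ loading matrix with $r \ll k$ (the low-rank component) and $\Sigma_{u}$ is the variance-covariance matrix of the idiosyncratic shocks.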
“…It is also important to mention that ensemble models have become widespread in recent years because they combine the advantages of multiple predictive models. The authors of [50] discussed an algorithm for selecting predictive models and combining them using penalized constrained optimization and shrinkage techniques to predict house prices in Boston. The model consists of 14 predictive algorithms, and the full dataset includes only 507 observations.…”
Mentioning (confidence: 99%)
“…The model consists of 14 predictive algorithms, and the full dataset includes only 507 observations. However, the authors of [50] also caution that including certain models in the ensemble risks worsening the forecast results.…”
Mentioning (confidence: 99%)