2011
DOI: 10.1198/jasa.2011.tm09478
Optimal Weight Choice for Frequentist Model Average Estimators

Abstract: There has been increasing interest recently in model averaging within the frequentist paradigm. The main benefit of model averaging over model selection is that it incorporates, rather than ignores, the uncertainty inherent in the model selection process. One of the most important, yet challenging, aspects of model averaging is how to optimally combine estimates from different models. In this work, we suggest a procedure of weight choice for frequentist model average estimators that exhibits optimality properties…

Cited by 195 publications (110 citation statements). References 28 publications.
“…The narrow model corresponds to S = ∅. Since our method of finding weights is based on minimizing a mean squared error expression, see also Liang et al (2011), this setting is justified since it balances the squared bias and the variance of the estimators in order for the mean squared error to be computable. Indeed, when not working under local misspecification, for a fixed true model not contained in the set of studied models asymptotically the bias would dominate, pointing toward always working with the most complicated model (Claeskens and Hjort (2008)).…”
Section: Notation and Setting
confidence: 99%
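The bias–variance balancing described in the statement above can be made concrete in the simplest two-model case: averaging an unbiased full-model estimator with a biased but lower-variance narrow estimator. The sketch below uses hypothetical values for the bias, variances, and covariance (none of these numbers come from the cited papers) and computes the MSE-optimal weight on the narrow estimator in closed form:

```python
import numpy as np

def optimal_weight(b, v_n, v_f, c=0.0):
    """MSE-optimal weight w on the narrow estimator in
    theta_hat(w) = w * narrow + (1 - w) * full, where the narrow
    estimator has bias b and variance v_n, the full estimator is
    unbiased with variance v_f, and c is their covariance."""
    return (v_f - c) / (b**2 + v_n + v_f - 2.0 * c)

def mse(w, b, v_n, v_f, c=0.0):
    """Mean squared error of the weighted average as a function of w."""
    return (w * b)**2 + w**2 * v_n + (1 - w)**2 * v_f + 2 * w * (1 - w) * c

# Placeholder values, chosen only for illustration.
b, v_n, v_f = 0.5, 0.8, 1.0
w_star = optimal_weight(b, v_n, v_f)

# The closed-form weight should beat every weight on a fine grid in [0, 1].
grid = np.linspace(0.0, 1.0, 1001)
assert mse(w_star, b, v_n, v_f) <= mse(grid, b, v_n, v_f).min() + 1e-12
```

Because the MSE is a convex quadratic in the weight, the first-order condition gives the closed form directly; the grid check above is only a sanity test of the algebra.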
“…We investigate the finite sample performance of the minimum MSE estimator (mMSE) and compare the results with other methods of averaging, in particular by the plug-in estimator (Liu (2015)), the so-called optimal estimator (OPT, Liang et al (2011)), Mallows model averaging (MMA, Hansen (2007)) and jackknife model averaging (JMA, Hansen and Racine (2012)). All of these methods are defined for averaging over all possible submodels.…”
Section: Linear Models
confidence: 99%
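Of the averaging schemes compared in the statement above, Mallows model averaging (MMA, Hansen 2007) is the easiest to sketch: the weights minimize a Mallows-type criterion, the residual sum of squares of the averaged fit plus a penalty of twice the error variance times the weighted model dimension, over the probability simplex. The illustration below uses simulated data and nested least-squares submodels; the design, coefficients, and sample size are placeholders, not values from any of the cited papers:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.25, 0.0])        # placeholder coefficients
y = X @ beta + rng.normal(size=n)

# Candidate nested submodels: the first k regressors, k = 1..p.
fits, dims = [], []
for k in range(1, p + 1):
    Xk = X[:, :k]
    bhat, *_ = np.linalg.lstsq(Xk, y, rcond=None)
    fits.append(Xk @ bhat)
    dims.append(k)
F = np.column_stack(fits)                     # n x M matrix of fitted values
dims = np.array(dims, dtype=float)

# Error variance estimated from the largest candidate model.
resid_full = y - F[:, -1]
sigma2 = resid_full @ resid_full / (n - p)

def mallows_criterion(w):
    """||y - F w||^2 + 2 * sigma2 * (weighted model dimension)."""
    r = y - F @ w
    return r @ r + 2.0 * sigma2 * dims @ w

# Minimize over the simplex {w >= 0, sum(w) = 1} (SLSQP handles both).
M = F.shape[1]
res = minimize(
    mallows_criterion,
    x0=np.full(M, 1.0 / M),
    bounds=[(0.0, 1.0)] * M,
    constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
)
w_mma = res.x
```

The criterion is a convex quadratic in the weights, so the simplex-constrained problem is a small quadratic program; a general-purpose solver is used here only to keep the sketch short.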