2008
DOI: 10.1111/j.1468-0084.2008.00536.x

Encompassing and Automatic Model Selection*

Cited by 50 publications (40 citation statements)
References 18 publications
“…For high non-centralities, the default-mode gauge is increased by about 1-2 percentage points (see §5.2). Doornik (2008) shows that encompassing checks against the GUM help stabilize performance. …”
Section: Simulation Results for N = 10
confidence: 99%
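The gauge referred to above is the empirical false-retention rate of irrelevant candidate variables. As a point of reference, here is a minimal Monte Carlo sketch of that measurement, assuming a simple one-cut t-test selection rule rather than the full Autometrics search; all parameter values are illustrative assumptions, not settings from Doornik (2008).

```python
import numpy as np
from scipy import stats

# Illustrative Monte Carlo estimate of the gauge (false-retention rate)
# of a one-cut t-test selection rule; every setting below is an assumption.
rng = np.random.default_rng(0)
T, N, alpha, reps = 100, 10, 0.05, 1000    # every regressor is irrelevant
crit = stats.t.ppf(1 - alpha / 2, df=T - N)

retained = 0
for _ in range(reps):
    X = rng.standard_normal((T, N))
    y = rng.standard_normal(T)             # DGP: y is unrelated to X
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    sigma2 = (e @ e) / (T - N)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    retained += int(np.sum(np.abs(beta / se) > crit))

gauge = retained / (reps * N)              # should be close to alpha here
print(f"estimated gauge: {gauge:.3f} (nominal level {alpha})")
```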
“…Autometrics improves further by a tree search to detect and eliminate statistically insignificant variables, and by handling N > T. At any stage, a variable is removed only if the new model is a valid reduction of the GUM (i.e., the new model must encompass the GUM at the chosen significance level: see Doornik, 2008). A path terminates when no variable meets the reduction criterion.…”
Section: Comparisons of 1-Cut Selection and Automated Gets
confidence: 99%
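In the linear-regression case, the encompassing check described in this excerpt reduces to an F-test of the candidate submodel against the GUM. A minimal sketch, assuming plain OLS and leaving out the misspecification-testing side of Autometrics; the function names are illustrative.

```python
import numpy as np
from scipy import stats

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return float(e @ e)

def encompasses_gum(X_gum, X_sub, y, alpha=0.01):
    """F-test of a candidate reduction against the GUM.

    Returns True when the reduction is not rejected at level alpha,
    i.e. the deleted variables are jointly insignificant, so the
    submodel parsimoniously encompasses the GUM.
    """
    T, k_u = X_gum.shape
    k_r = X_sub.shape[1]
    rss_u, rss_r = rss(X_gum, y), rss(X_sub, y)
    F = ((rss_r - rss_u) / (k_u - k_r)) / (rss_u / (T - k_u))
    return stats.f.sf(F, k_u - k_r, T - k_u) > alpha
```

In a tree search, a deletion step would then be accepted only when the resulting model still passes this test against the GUM.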
“…The general model often has more candidate variables N than observations T, so block contracting and expanding searches are required, as implemented in Autometrics: see Doornik (2009) and Doornik (2007). Once a feasibly estimable model is obtained with n << T, it is evaluated for congruence by a range of misspecification tests and, if not rejected, reduction continues until no insignificant variables remain; that terminal model is then checked by encompassing: see Doornik (2008).…”
Section: Introduction
confidence: 99%
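A highly simplified sketch of the block-contraction idea for N > T: select within blocks smaller than T and pool the survivors until nothing more drops. The `select_within` routine below is a placeholder one-cut rule, not the actual Autometrics block search of Doornik (2009), which also has expanding steps and misspecification tests.

```python
import numpy as np
from scipy import stats

def select_within(X, y, cols, alpha=0.01):
    """Placeholder selection inside one estimable block: keep the
    columns whose |t|-ratio exceeds the two-sided critical value.
    (A stand-in for a full Gets search; illustrative only.)"""
    Xb = X[:, cols]
    T, k = Xb.shape
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    e = y - Xb @ beta
    se = np.sqrt((e @ e) / (T - k) * np.diag(np.linalg.inv(Xb.T @ Xb)))
    crit = stats.t.ppf(1 - alpha / 2, df=T - k)
    return [c for c, t in zip(cols, np.abs(beta / se)) if t > crit]

def block_search(X, y, block_size, alpha=0.01):
    """Contract N > T candidates: select within blocks of size
    block_size < T and pool the survivors until nothing more drops."""
    survivors = list(range(X.shape[1]))
    while True:
        pooled = []
        for i in range(0, len(survivors), block_size):
            pooled += select_within(X, y, survivors[i:i + block_size], alpha)
        if len(pooled) == len(survivors):   # stable: no further contraction
            return pooled
        survivors = pooled
```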
“…For model selection, we follow an automatic selection method along the lines of Doornik (2008), Hyndman and Athanasopoulos (2013), and Hendry and Doornik (2014). More precisely, we perform a backwards stepwise regression starting with a model containing all potential predictors and removing one predictor at a time.…”
Section: Forecasting Models' Comparison
confidence: 99%
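A minimal sketch of such a backwards stepwise loop, assuming statsmodels OLS and removal of the least significant predictor at each step; the 0.05 threshold and the function name are illustrative assumptions, not the cited papers' choices.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Fit the full model, then repeatedly drop the predictor with the
    largest p-value until every remaining predictor clears alpha."""
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop("const")    # the intercept is always kept
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return fit                       # all predictors significant
        cols.remove(worst)
    # fall back to the intercept-only model if everything is dropped
    return sm.OLS(y, np.ones(len(y))).fit()
```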