1993
DOI: 10.1007/bf00773667

Model selection and prediction: Normal regression

Abstract: This paper discusses the topic of model selection for finite-dimensional normal regression models. We compare model selection criteria according to prediction errors based upon prediction with refitting, and prediction without refitting. We provide a new lower bound for prediction without refitting, while a lower bound for prediction with refitting was given by Rissanen. Moreover, we specify a set of sufficient conditions for a model selection criterion to achieve these bounds. Then the achiev…

Years of citing publications: 1996–2016
Cited by 70 publications (54 citation statements)
References 38 publications
“…Geweke and Meese (1981) show in a stochastic regressors setup that this condition is necessary for consistent model selection. Speed and Yu (1993) also show that the BIC with C_T = log T is desirable for prediction. Asymptotic efficiency of the BIC is also shown in Shao (1997).…”
Section: Introduction (mentioning)
confidence: 83%
“…The model order can be chosen objectively by minimizing an information criterion function. For example, the Bayesian information criterion (BIC) [20] function is…”
Section: Theory (mentioning)
confidence: 99%
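The model-order selection described in the statement above can be sketched as follows. This is a minimal illustration of BIC-based selection over nested normal regression models, not code from the cited paper; the data, coefficients, and sample size are invented for the example.

```python
import numpy as np

def bic(y, X):
    """BIC for a normal linear model with unknown noise variance.

    Uses the common form n*log(RSS/n) + k*log(n), where k is the
    number of regressors; additive constants are dropped since they
    do not affect which model minimizes the criterion.
    """
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
X_full = rng.normal(size=(n, 5))
# true model uses only the first 2 of 5 candidate regressors
y = X_full[:, :2] @ np.array([1.5, -2.0]) + rng.normal(size=n)

# nested candidate models: first k columns, k = 1..5
scores = {k: bic(y, X_full[:, :k]) for k in range(1, 6)}
best = min(scores, key=scores.get)  # order minimizing the BIC
```

With a log(n) penalty per parameter, underfitting (k = 1 here) is heavily punished through the residual sum of squares, while adding spurious regressors rarely pays for the penalty, which is the consistency property the surrounding citation statements discuss.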
“…The research in Speed and Yu [12] started in 1987. The paper was possibly written in 1989, with many drafts including extensive comments by David Freedman on the first draft and it was a long story regarding why it took four years to publish.…”
mentioning
confidence: 99%
“…By then, it was well-known that AIC is prediction optimal and inconsistent (unless the true model is the largest model), while BIC is consistent when the true model is finite and one of the sub-regression models considered. Speed and Yu [12] addresses the prediction optimality question with refitting (causal or on-line prediction) and without refitting (batch prediction). A new lower bound on the latter was derived with sufficient achievability conditions, while a lower bound on the former had been given by Rissanen [8].…”
mentioning
confidence: 99%
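The distinction drawn above between prediction with refitting (causal or on-line) and without refitting (batch) can be made concrete with a toy sketch. The setup below is hypothetical, only meant to show how the two prediction error notions accumulate differently; it does not reproduce the bounds in Speed and Yu [12] or Rissanen [8].

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 2
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -1.0])
y = X @ beta_true + rng.normal(size=n)

def ols(Xa, ya):
    b, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
    return b

# Prediction WITH refitting (on-line): at each time t, refit the model
# on observations 1..t-1 and predict y_t, accumulating squared errors.
err_refit = 0.0
for t in range(k + 1, n):
    b = ols(X[:t], y[:t])
    err_refit += (y[t] - X[t] @ b) ** 2

# Prediction WITHOUT refitting (batch): fit once on a training block,
# then predict the whole remaining block with the fixed estimate.
half = n // 2
b = ols(X[:half], y[:half])
err_batch = np.sum((y[half:] - X[half:] @ b) ** 2)
```

In the on-line scheme every prediction uses only past data and the fit is updated at each step; in the batch scheme a single fitted model is reused, which is the setting for which the paper derives its new lower bound.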