1996
DOI: 10.1093/biomet/83.4.875
Generalised information criteria in model selection

Abstract: The problem of evaluating the goodness of statistical models is investigated from an information-theoretic point of view. Information criteria are proposed for evaluating models constructed by various estimation procedures when the specified family of probability distributions does not contain the distribution generating the data. The proposed criteria are applied to the evaluation of models estimated by maximum likelihood, robust, penalised likelihood, Bayes procedures, etc. We also discuss the use of the boot…

Cited by 392 publications (323 citation statements)
References 29 publications
“…The parameter learning process should be therefore repeated a number of times to obtain a more stable solution. This process is also known as bootstrapping in statistics (Efron, 1979; Konishi and Kitagawa, 1996). The medians of the parameters were then calculated across repetitions for each functional category produced by each speaker.…”
Section: Analysis-by-synthesis With Stochastic Optimization
confidence: 99%
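The resample-and-take-medians scheme described in this citation statement can be sketched in a few lines of Python. This is a minimal illustration, not code from the cited work: the `bootstrap_medians` helper, the toy data, and the choice of the sample mean as the estimator are all assumptions made for the example.

```python
import random
import statistics

def bootstrap_medians(data, estimator, B=200, seed=0):
    """Re-run an estimation procedure on B bootstrap resamples of the
    data and return the median of the resulting parameter estimates."""
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(data)
    estimates = []
    for _ in range(B):
        # Resample the data with replacement, then re-estimate.
        resample = [data[rng.randrange(n)] for _ in range(n)]
        estimates.append(estimator(resample))
    return statistics.median(estimates)

data = [2.0, 3.5, 1.2, 4.8, 2.9, 3.1]  # toy data, illustrative only
est = bootstrap_medians(data, statistics.mean)
```

Taking the median across repetitions, as the quoted passage does, makes the reported parameter less sensitive to the occasional unstable optimization run than the mean would be.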
“…The subject was first proposed by Akaike [1973], who introduced the principle of maximum entropy as the theoretical basis for model selection, and by Schwarz [1978], who, by developing a similar idea in a Bayesian context, proposed the Bayesian information criterion for model selection. Extensions of these methods include corrections to be used with small sample size [Hurvich and Tsai, 1989] and generalizations of the mentioned criteria [e.g., see Bozdogan, 1987; Konishi and Kitagawa, 1996; Wasserman, 2000]. Only with the recent increase of computer capabilities have other methods been proposed and developed for the nonasymptotic model selection, based on bootstrap [Chung et al., 1996] or on cross-validation [Browne, 2000] techniques.…”
Section: Introduction
confidence: 99%
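The classical criteria this passage surveys have simple closed forms. A minimal sketch, where `loglik` is the maximised log-likelihood, `k` the number of free parameters, and `n` the sample size (function names are illustrative):

```python
import math

def aic(loglik, k):
    """Akaike information criterion (Akaike, 1973): -2 log L + 2k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion (Schwarz, 1978): -2 log L + k log n."""
    return -2.0 * loglik + k * math.log(n)

def aicc(loglik, k, n):
    """Small-sample corrected AIC (Hurvich and Tsai, 1989)."""
    return aic(loglik, k) + 2.0 * k * (k + 1) / (n - k - 1)
```

In each case the model with the smallest criterion value is preferred; the log n penalty makes BIC penalise extra parameters more heavily than AIC once n > e² ≈ 7.4.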
“…In other settings, however, these criteria's tendency to err toward non-detection of true predictors may represent a serious limitation. Another concern is the variability of our methods' overoptimism estimates, which cannot be reduced by the technique of Konishi and Kitagawa (1996) since that device is applicable only in the fixed-model case. In ongoing research, we are seeking ways to surmount these limitations.…”
Section: Discussion
confidence: 99%
“…(See also Konishi and Kitagawa (1996), and the bootstrap model selection criterion of Shao (1996), which is based on prediction error loss rather than likelihood.) Suppose we sample n pairs (y_i, x_i) from the data, with replacement, B times, and denote the bth bootstrap data set thus generated by (y*_b, X*_b) and the associated MLEs by β̂…”
Section: Selection Bias In Best-subset Regression
confidence: 99%
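The pairs-bootstrap procedure quoted above, resampling n pairs (y_i, x_i) with replacement B times and refitting on each bootstrap data set, can be sketched for simple least-squares regression. All function names and the toy data are illustrative, not from the cited paper:

```python
import random

def ols_slope(xs, ys):
    """Least-squares slope for the simple regression y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def pairs_bootstrap_slopes(xs, ys, B=200, seed=0):
    """Draw B bootstrap data sets by resampling (y_i, x_i) pairs with
    replacement, refit the slope on each, and return all estimates."""
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(xs)
    slopes = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [xs[i] for i in idx]
        if len(set(bx)) < 2:  # skip degenerate resamples with no x variation
            continue
        by = [ys[i] for i in idx]
        slopes.append(ols_slope(bx, by))
    return slopes

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]  # toy data, roughly y = x
slopes = pairs_bootstrap_slopes(xs, ys)
```

The spread of the resulting bootstrap estimates is what criteria in this family use to estimate the bias introduced by fitting and evaluating a model on the same data.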