2005
DOI: 10.3150/bj/1126126768
Bootstrap prediction and Bayesian prediction under misspecified models

Cited by 21 publications (24 citation statements)
References 9 publications
“…The bootstrap predictive distribution is obtained by applying the bagging algorithm to the plug-in distribution with the maximum likelihood estimator (MLE). The effectiveness of bootstrap prediction has been studied under the Kullback-Leibler loss (Fushiki, 2005; Harris, 1989). It is known that Bayesian prediction is admissible when a proper prior is used (Aitchison, 1975).…”
Section: Introduction
confidence: 99%
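The construction described in this excerpt — bagging applied to the plug-in distribution with the MLE — can be sketched in a few lines: resample the data, fit the MLE on each resample, and average the resulting plug-in densities. A minimal sketch for a one-dimensional Gaussian model (the model choice, function names, and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at y."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bootstrap_predictive(x, y_grid, B=1000, rng=None):
    """Average the plug-in densities fitted on B bootstrap resamples.

    Plug-in model: N(mu_hat, sigma_hat^2) with the Gaussian MLE
    (sample mean and uncorrected sample standard deviation).
    """
    rng = np.random.default_rng(rng)
    n = len(x)
    dens = np.zeros_like(y_grid, dtype=float)
    for _ in range(B):
        xb = rng.choice(x, size=n, replace=True)   # ordinary bootstrap resample
        mu, sigma = xb.mean(), xb.std()            # Gaussian MLE on the resample
        dens += normal_pdf(y_grid, mu, max(sigma, 1e-12))
    return dens / B                                # bagged (bootstrap) predictive

# toy usage on synthetic standard-normal data
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=30)
grid = np.linspace(-4.0, 4.0, 81)
p = bootstrap_predictive(x, grid, B=500, rng=1)
```

The resulting `p` is a proper density estimate on the grid: nonnegative everywhere, with total mass close to one over a grid wide enough to cover the tails.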
“…However, the prediction by a learning machine θ usually has both a bias and a variance, so the predictive distribution given by (6), which assumes no bias, and the distribution (7) derived from it seem inappropriate. We therefore derive (5) by another approach.…”
Section: A Bayesian Bagging Prediction
confidence: 98%
“…One uses a hyperparameter applicable to several neural networks, such as perceptron and backpropagation networks, where the norm of the network's connection weights is taken to reflect generalization performance and is used as the hyperparameter (see [2], [3]). Another uses the Bayesian bootstrap [4], which allows continuous weights on the data in the bootstrap sample sets instead of the discrete weights of the ordinary bootstrap; its application to bagging is called Bayesian bagging [5] or Bayesian bootstrap prediction [6], …”
Section: Introduction
confidence: 99%
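The contrast this excerpt draws — discrete ordinary-bootstrap weights versus the continuous weights of the Bayesian bootstrap [4] — can be shown directly. Drawing Dirichlet(1, …, 1) weights by normalizing standard exponentials is a standard construction; the variable names below are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of observations

# Ordinary bootstrap: each observation gets weight k_i / n,
# where the counts k follow Multinomial(n, 1/n, ..., 1/n).
counts = rng.multinomial(n, np.full(n, 1.0 / n))
w_ordinary = counts / n          # discrete weights, multiples of 1/n

# Bayesian bootstrap (Rubin, 1981): continuous weights drawn from
# Dirichlet(1, ..., 1), constructed as normalized standard exponentials.
g = rng.exponential(size=n)
w_bayes = g / g.sum()            # strictly positive, continuous weights
```

Both weight vectors sum to one; averaging a weighted estimator over many such Dirichlet draws is what yields the Bayesian bagging / Bayesian bootstrap prediction mentioned in the excerpt.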
“…, y_Z }, and use the expected model change to approximate the true model change. The use of the bootstrap to estimate the predictive distribution has been well investigated [35]. Thus, the final EMCM sampling function for linear regression is expressed as…”
Section: Algorithm 1, EMCM for Linear Regression
confidence: 99%
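A hedged sketch of one common expected-model-change-maximization (EMCM) formulation for linear regression, along the lines of this excerpt: since a candidate's true label is unknown, it is replaced by predictions y_1, …, y_Z from a small bootstrap ensemble, and the model change for a candidate x is measured by the gradient norm of the squared loss, |f(x) − y_z| · ‖x‖, averaged over the ensemble. The function name and the toy data are illustrative, not from the cited paper:

```python
import numpy as np

def emcm_scores(X_pool, X_train, y_train, Z=4, rng=None):
    """Score unlabelled pool points by expected model change.

    Each candidate's unknown label is approximated by predictions from
    Z bootstrap-fitted least-squares models; the score is the mean
    gradient norm of the squared loss, |f(x) - y_z| * ||x||.
    """
    rng = np.random.default_rng(rng)
    n = len(X_train)
    # ensemble of least-squares fits on bootstrap resamples
    ws = []
    for _ in range(Z):
        idx = rng.integers(0, n, size=n)
        w, *_ = np.linalg.lstsq(X_train[idx], y_train[idx], rcond=None)
        ws.append(w)
    # central model fitted on all labelled data
    w_full, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    scores = []
    for x in X_pool:
        f = x @ w_full
        # expected gradient norm over the bootstrap label guesses
        s = np.mean([np.abs(f - x @ w) * np.linalg.norm(x) for w in ws])
        scores.append(s)
    return np.array(scores)

# toy usage with synthetic data (sizes and coefficients are illustrative)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 3))
y_train = X_train @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=20)
X_pool = rng.normal(size=(5, 3))
scores = emcm_scores(X_pool, X_train, y_train, Z=3, rng=1)
```

In active learning, the pool point with the largest score would be queried next, since labelling it is expected to change the model the most.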