2006
DOI: 10.1214/009053606000000155
Improved minimax predictive densities under Kullback–Leibler loss

Abstract: Let X|µ ∼ N_p(µ, v_x I) and Y|µ ∼ N_p(µ, v_y I) be independent p-dimensional multivariate normal vectors with common unknown mean µ. Based on only observing X = x, we consider the problem of obtaining a predictive density p̂(y|x) for Y that is close to p(y|µ) as measured by expected Kullback–Leibler loss. A natural procedure for this problem is the (formal) Bayes predictive density p_U(y|x) under the uniform prior π_U(µ) ≡ 1, which is best invariant and minimax. We show that any Bayes predictive density will be minimax i…
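The setup in the abstract can be made concrete with a short numerical sketch. Under the uniform prior, the Bayes predictive density is known in closed form, p_U(y|x) = N_p(x, (v_x + v_y)I) (a classical result going back to Aitchison), and the Kullback–Leibler loss between two isotropic normals has a simple closed form. The code below is an illustrative sketch, not the paper's method: the function name `kl_isotropic_normal` and the comparison against a plug-in density are our own choices for exposition.

```python
import numpy as np

def kl_isotropic_normal(mu0, v0, mu1, v1):
    """KL( N_p(mu0, v0 I) || N_p(mu1, v1 I) ) in closed form."""
    p = mu0.size
    return 0.5 * (p * v0 / v1 + np.sum((mu1 - mu0) ** 2) / v1
                  - p + p * np.log(v1 / v0))

rng = np.random.default_rng(0)
p, vx, vy = 10, 1.0, 1.0
mu = np.zeros(p)                                  # true (unknown) mean
x = mu + np.sqrt(vx) * rng.standard_normal(p)     # observe X ~ N_p(mu, vx I)

# Uniform-prior Bayes predictive density: p_U(y|x) = N_p(x, (vx + vy) I)
loss_U = kl_isotropic_normal(mu, vy, x, vx + vy)

# Naive "plug-in" density N_p(x, vy I), which ignores the uncertainty in x
loss_plugin = kl_isotropic_normal(mu, vy, x, vy)
```

Averaging `loss_U` over repeated draws of X recovers the constant risk of p_U, which is the minimax benchmark that the paper's shrinkage priors improve upon when p ≥ 3.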

Cited by 88 publications (112 citation statements) · References 24 publications
“…Our results can be seen as a substantial generalization of George, Liang and Xu (2006), who considered the special case of this problem when X ∼ N_m(µ, σ²_x I) and Y ∼ N_m(µ, σ²_y I), where µ is the common unknown multivariate normal mean. Moving further away from this common mean setup, we proceed in Section 3 to extend these results to the setting where only a subset of the p predictors is considered to be potentially irrelevant.…”
Section: Introduction
confidence: 66%
“…Proposition 1 is without an assumption concerning twice differentiability of π and is a slight generalization of the results in George et al (2006).…”
Section: Minimaxity Of Bayes Estimators And Bayesian Predictive Densities
confidence: 74%
“…We extended the Bayesian shrinkage prediction method for vector-variate Normal distributions of Komaki (2001) and George et al (2006) to the matrix-variate case. Then, we can consider further extensions to the tensor-variate case.…”
Section: Discussion
confidence: 99%