2008
DOI: 10.1017/s0266466608080213
Predictive Density Estimation for Multiple Regression

Abstract: Suppose we observe X ∼ N_m(Aβ, σ²I) and would like to estimate the predictive density …
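The abstract's setup can be made concrete. The sketch below is our own (the future design matrix B, the dimension names, and the helper name `uniform_prior_predictive` are assumptions, not code from the paper); it implements the standard result that, under the improper uniform prior on β, the Bayes predictive density of a future Y ∼ N_n(Bβ, σ²I) is again normal, centered at the least-squares fit with an inflated covariance:

```python
import numpy as np

def uniform_prior_predictive(X, A, B, sigma2=1.0):
    """Bayes predictive density of a future Y ~ N(B beta, sigma2 I), after
    observing X ~ N(A beta, sigma2 I), under the improper uniform prior on
    beta.  The predictive is normal; return its mean vector and covariance.
    """
    G = A.T @ A                                # Gram matrix of the design
    beta_hat = np.linalg.solve(G, A.T @ X)     # least squares = posterior mean
    mean = B @ beta_hat
    # Future-noise covariance plus the propagated posterior uncertainty in beta
    cov = sigma2 * (np.eye(B.shape[0]) + B @ np.linalg.solve(G, B.T))
    return mean, cov
```

The second term, B(AᵀA)⁻¹Bᵀ, carries the posterior uncertainty in β; shrinkage priors of the kind studied in the paper modify this uniform-prior baseline.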


Cited by 12 publications (16 citation statements) · References 10 publications
“…We note that [4] developed minimaxity and dominance conditions on prior distributions for the predictive regression problem with changeable covariance independently of our study. They also proposed a shrinkage prediction where only a subset of the regression coefficients are close to zero.…”
Section: Conclusion and Discussion
confidence: 99%
“…However, these two covariance structures are generally different when we consider the regression problem (Kobayashi & Komaki, 2008; George & Xu, 2008). We extend the results for this general situation to matrix-variate case.…”
Section: Singular Value Shrinkage Priors Depending on the Future Covariance
confidence: 57%
“…George et al. (2006) generalized this result and proved that Bayesian predictive densities based on superharmonic priors dominate those based on the Jeffreys prior under the Kullback-Leibler risk. Next, Kobayashi & Komaki (2008) and George & Xu (2008) considered the cases where Σ̃ is not necessarily proportional to Σ. Bayesian predictive densities based on superharmonic priors dominate those based on the uniform prior under the Kullback-Leibler risk also in this general situation. Kobayashi & Komaki (2008) and George & Xu (2008) applied their results to linear regression problems.…”
Section: Introduction
confidence: 99%
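The Kullback-Leibler dominance quoted above can be checked numerically in the simplest setting (m = 1, σ² = 1, future variance equal to the past one). The Monte Carlo sketch below is ours and only illustrates the classical baseline fact that the uniform-prior Bayes predictive N(x, 2) has smaller KL risk than the plug-in density N(x, 1); the superharmonic-prior refinement from the cited papers is not implemented:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_normal(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) ) for scalar normals (vectorised)."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

theta = 1.7                                  # arbitrary true mean
xs = rng.normal(theta, 1.0, size=200_000)    # Monte Carlo draws of X ~ N(theta, 1)

# Plug-in predictive: N(x, 1).  Uniform-prior Bayes predictive: N(x, 2).
plugin_risk = kl_normal(theta, 1.0, xs, 1.0).mean()          # analytic value: 1/2
bayes_risk = kl_normal(theta, 1.0, xs, np.sqrt(2.0)).mean()  # analytic value: log(2)/2
```

The Bayes predictive's extra variance is exactly the posterior uncertainty in the mean, which is why it beats the overconfident plug-in density for every θ.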
“…George [17-19] showed that such multiple shrinkage estimators can adaptively shrink toward the point or subspace most favored by the data. George and Xu [21] used the same idea to construct multiple shrinkage Bayesian predictive densities for linear regression models. Another possible extension of this work is to consider different ''linear coefficients'' for the mean and the variance of the empirical Bayes predictive densities.…”
Section: Discussion
confidence: 99%
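The multiple shrinkage idea referenced above can be sketched with a conjugate stand-in: a uniform mixture of normal priors N(b_k, τ²I), one per candidate target, gives a posterior-mean estimator whose data-driven weights pull the estimate toward the target the data favor. This is our own simplified illustration (the function name and the conjugate-mixture form are assumptions), not George's harmonic-prior construction:

```python
import numpy as np

def multiple_shrinkage_mean(x, targets, tau2=1.0, sigma2=1.0):
    """Posterior mean of theta for X ~ N(theta, sigma2 I) under a uniform
    mixture of conjugate priors N(b_k, tau2 I), one per shrinkage target b_k.
    """
    shrink = tau2 / (tau2 + sigma2)           # per-component pull toward x
    # Unnormalised marginal likelihood of x under each component,
    # N(b_k, (tau2 + sigma2) I); the shared constant cancels in the weights.
    d2 = np.array([np.sum((x - b) ** 2) for b in targets])
    w = np.exp(-(d2 - d2.min()) / (2 * (tau2 + sigma2)))   # stabilised exponent
    w /= w.sum()
    # Mix the component-wise posterior means with the data-driven weights
    comps = np.array([b + shrink * (x - b) for b in targets])
    return w @ comps
```

When x sits near one target, that component's weight dominates and the estimate shrinks toward it; data equidistant from the targets split the weights evenly, which is the adaptivity the quoted passage describes.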