2015
DOI: 10.1214/14-aos1251

Exact minimax estimation of the predictive density in sparse Gaussian models

Abstract: We consider estimating the predictive density under Kullback–Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first-order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision-theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature…
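The plug-in suboptimality noted in the abstract is easiest to see in the classical one-dimensional Gaussian problem, before sparsity enters at all. The sketch below is not from the paper; the names and the choice theta = 0 are illustrative. It compares the Kullback–Leibler risk of the plug-in rule N(x, var_y) with that of the flat-prior Bayes predictive density N(x, var_x + var_y), when X ~ N(theta, var_x) is observed and Y ~ N(theta, var_y) is to be predicted.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ), elementwise over arrays."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Hypothetical toy setup: observe X ~ N(theta, var_x), predict Y ~ N(theta, var_y).
theta, var_x, var_y = 0.0, 1.0, 1.0
x = rng.normal(theta, np.sqrt(var_x), size=200_000)

# Plug-in predictive density N(x, var_y): its KL risk is var_x / (2 * var_y) exactly.
risk_plugin = kl_gauss(theta, var_y, x, var_y).mean()

# Flat-prior Bayes predictive density N(x, var_x + var_y): strictly smaller KL risk.
risk_bayes = kl_gauss(theta, var_y, x, var_x + var_y).mean()

print(f"plug-in KL risk ~ {risk_plugin:.4f} (exact 0.5)")
print(f"Bayes   KL risk ~ {risk_bayes:.4f} (about 0.3466)")
```

Under these assumptions the plug-in risk is exactly var_x/(2·var_y) = 0.5, while the Bayes predictive density achieves about 0.347, illustrating in miniature why the paper studies predictive density estimates beyond the plug-in class.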

Cited by 16 publications (12 citation statements: 1 supporting, 11 mentioning, 0 contrasting). References 29 publications.
“…Risk at origin. The risk at the origin for our cluster-prior-based Bayes prde is asymptotically much smaller than that of the thresholding-based risk-diversified prde of Mukherjee and Johnstone [2015]. Hence, comparing equation (51) in that paper with the following result, any thresholding-based minimax-optimal prde will have much higher risk at the origin than the cluster-prior-based Bayes prde.…”
Section: Proof of Theorem (mentioning)
confidence: 76%
“…When r > r_0, K = 1, and so the above result directly implies B(π_n^C)/R^*(Θ_0[s_n]) → 1 as n → ∞. The condition s_n → ∞ ensures that the prior concentrates on the parameter space Θ_0[s_n] defined on page 2 of the main paper (see Theorem 1B of Mukherjee and Johnstone [2015] for details) and is thus least favorable in this case.…”
Section: Proof of Theorem (mentioning)
confidence: 84%
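As background for the excerpt above: a sequence of priors is asymptotically least favorable when its Bayes risk matches the first-order minimax risk. A sketch of the standard definition in the excerpt's notation, with R^* the minimax Kullback–Leibler risk over Θ_0[s_n] and B(π_n) the Bayes KL risk under π_n:

```latex
% Minimax KL risk over the sparse parameter set, and asymptotic least favorability:
R^{*}(\Theta_0[s_n]) = \inf_{\hat p}\, \sup_{\theta \in \Theta_0[s_n]}
    \mathbb{E}_{\theta}\, D_{\mathrm{KL}}\bigl(p_{\theta} \,\|\, \hat p\bigr),
\qquad
\lim_{n \to \infty} \frac{B(\pi_n)}{R^{*}(\Theta_0[s_n])} = 1 .
```

The excerpt verifies this limit for the cluster prior π_n^C, citing Theorem 1B of Mukherjee and Johnstone [2015] for the corresponding concentration condition.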
“…We discussed the asymptotic minimaxity and adaptivity for the ellipsoidal parameter space. There are many other types of parameter spaces in high-dimensional and nonparametric models; for example, Mukherjee and Johnstone [12] discussed asymptotically minimax prediction in the high-dimensional Gaussian sequence model under sparsity. For future work, we should focus on asymptotically minimax adaptive predictive distributions in other parameter spaces.…”
Section: Discussion (mentioning)
confidence: 99%
“…Relatively little is known about constructing predictive densities in high dimensions. [41,40] constructed an asymptotically minimax predictive density for sparse Gaussian models. [46] obtained an asymptotically minimax predictive density for nonparametric Gaussian regression models under Sobolev constraints; thereafter, [49] obtained an adaptive minimax predictive density for these models.…”
Section: Literature Review (mentioning)
confidence: 99%
“…Conversely, little is known about predictive densities for statistical models in high dimensions. For prediction in sparse high-dimensional Gaussian models, [41,40] constructed several predictive densities (including a Bayes predictive density) that are superior to all plug-in predictive densities.…”
Section: Introduction (mentioning)
confidence: 99%