2010
DOI: 10.1016/j.jmva.2010.06.012

The L1-consistency of Dirichlet mixtures in multivariate Bayesian density estimation

Abstract: Density estimation, especially multivariate density estimation, is a fundamental problem in nonparametric inference. In the Bayesian approach, Dirichlet mixture priors are often used in practice for such problems. However, the asymptotic properties of such priors have only been studied in the univariate case. We extend the L1-consistency of Dirichlet mixtures to the multivariate density estimation setting. We obtain such a result by showing that the Kullback-Leibler property of the prior holds…
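For orientation, here is a minimal sketch of the standard Dirichlet mixture setup and the Kullback-Leibler (KL) property the abstract refers to; the kernel, base measure, and symbols (φ_Σ, α, Π, f0) are conventional notation, not taken verbatim from the paper:

```latex
% Dirichlet mixture of normals: the random density is the convolution
% of a Gaussian kernel with a Dirichlet-process random measure.
\[
  f_{P,\Sigma}(x) = \int_{\mathbb{R}^d} \phi_\Sigma(x-\mu)\,\mathrm{d}P(\mu),
  \qquad P \sim \mathrm{DP}(\alpha).
\]
% The prior \Pi has the KL property at the true density f_0 if every
% Kullback-Leibler neighborhood of f_0 has positive prior mass:
\[
  \Pi\!\left( f : \int f_0 \log\frac{f_0}{f} < \varepsilon \right) > 0
  \quad \text{for every } \varepsilon > 0.
\]
```

By Schwartz's theorem, the KL property already yields weak posterior consistency; the standard route to the stronger L1 (strong) consistency claimed in the title is to combine it with metric-entropy (sieve) conditions on the model.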

Citations: cited by 24 publications (2 citation statements)
References: 12 publications
“…Thus, for L, K = ∞, the covariance of the proposed model satisfies the criteria of the Karhunen-Loève theorem (Alexanderian (2015)) and spans the covariance structure of all square-integrable stochastic processes with continuous covariance functions. Under suitable regularity conditions, posterior consistency of the proposed model holds (Wu and Ghosal (2010), Ghosal and van der Vaart (2017)).…”
Section: Low-rank Student's t Process (LTP): LTPs Are Richer Than…
confidence: 98%
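For context, this is the Karhunen-Loève expansion invoked in the statement above, written in generic notation (the eigenpairs (λ_k, e_k) and coefficients ξ_k are conventional symbols, not taken from the citing paper):

```latex
% Karhunen-Loeve expansion of a zero-mean, square-integrable process X
% with continuous covariance C(s,t) = \sum_k \lambda_k e_k(s) e_k(t):
\[
  X(t) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\,\xi_k\,e_k(t),
\]
% where (\lambda_k, e_k) are the eigenpairs of the covariance operator
% and the \xi_k are uncorrelated with mean zero and unit variance.
```

Truncating the sum at a finite K gives the low-rank approximation; letting the truncation level grow without bound recovers the full class of continuous covariances, which is the sense of the quoted claim.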
“…With advancements in computing and general inference schemes such as Markov chain Monte Carlo (MCMC), BNP mixtures can be easily implemented. Moreover, well-established theory validates the use of BNP mixtures for asymptotically optimal density estimation [141–144]. Together these properties and developments have led to the huge growth and adoption of BNP mixtures, especially DP mixtures, for a variety of applications in statistics and machine learning in the twenty-first century.…”
Section: Estimating the Number of Clusters and Model Misspecification
confidence: 99%
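To make the "easily implemented" point concrete, here is a minimal, self-contained sketch of drawing one random density from a truncated stick-breaking Dirichlet process mixture of Gaussians; the truncation level, kernel bandwidth, and base measure below are illustrative choices, not parameters from any of the cited works:

```python
import numpy as np

def sample_dp_mixture_density(M=1.0, K=50, sigma=0.5, seed=0):
    """Draw one density f(x) = sum_k w_k N(x; mu_k, sigma^2) from a
    DP mixture prior via Sethuraman's truncated stick-breaking."""
    rng = np.random.default_rng(seed)
    # Stick-breaking: v_k ~ Beta(1, M), w_k = v_k * prod_{j<k} (1 - v_j).
    v = rng.beta(1.0, M, size=K)
    v[-1] = 1.0  # absorb the residual stick so the weights sum to 1
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    # Atom locations drawn i.i.d. from the base measure N(0, 2^2).
    mu = rng.normal(0.0, 2.0, size=K)

    def density(x):
        x = np.asarray(x, dtype=float)[..., None]  # broadcast over K atoms
        comps = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        return comps @ w

    return density

# Evaluate one prior draw on a small grid.
f = sample_dp_mixture_density()
print(f(np.linspace(-3.0, 3.0, 5)))
```

Posterior inference for such mixtures is typically carried out with blocked Gibbs sampling on the stick-breaking representation or with Pólya-urn (Chinese restaurant process) samplers; the consistency theory cited above is what guarantees these posteriors concentrate on the true density as data accrue.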