2001
DOI: 10.1214/aos/1013203452
Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities

Cited by 190 publications (238 citation statements)
References 25 publications
“…Our aim is to approximate f0 using multivariate Gaussian RBF‐net. The result essentially extends the approximation result in Lemma 3.1 of [22] from dimension 1 to dimension p0. Thus they share commonalities in the expression of the obtained approximation.…”
Section: Theoretical Support (supporting)
confidence: 82%
“…Starting with error terms, a possibility is to assume a flexible parametric family for errors, for example using normal mixtures. Ghosal and van der Vaart (2001) provide results on the ability of normal mixtures to approximate unknown densities. Imposing a flexible parametric structure should not be seen as a severe limitation if the conditions of the identification theorems are satisfied.…”
Section: Densities (mentioning)
confidence: 99%
“…For the Gaussian kernel and the Dirichlet Process prior of the mixing distribution, asymptotic properties, such as consistency, and rate of convergence of the posterior distribution based on kernel mixture priors were established by Ghosal, Ghosh, and Ramamoorthi (1999), Tokdar (2006), and Ghosal and van der Vaart (2001, 2007). Similar results for Dirichlet mixture of Bernstein polynomials were shown by Petrone and Wasserman (2002), Ghosal (2001) and Kruijer and van der Vaart (2008).…”
Section: Bayesian Nonparametric Copula Kernel Mixture (mentioning)
confidence: 73%
“…For a subjective Bayesian who dispenses with the notion of a true parameter, consistency has an important connection with the stability of predictive distributions of future observations: a consistent posterior will tend to agree with the calculations of other Bayesians using a different prior distribution, in the sense of the weak topology. For an objective Bayesian who assumes the existence of an unknown true model, consistency can be thought of as a validation of the Bayesian method as approaching the mechanism used to generate the data (Ghosal and van der Vaart, 2011). In this sense, it is desirable to establish the conditions under which infinite mixture models have a consistent posterior, as the use of models that are inconsistent can be regarded as ill-advised.…”
(mentioning)
confidence: 99%