2014
DOI: 10.1007/s11222-014-9519-4

An adaptive truncation method for inference in Bayesian nonparametric models

Abstract: Many exact Markov chain Monte Carlo algorithms have been developed for posterior inference in Bayesian nonparametric models which involve infinite-dimensional priors. However, these methods are not generic and special methodology must be developed for different classes of prior or different models. Alternatively, the infinite-dimensional prior can be truncated and standard Markov chain Monte Carlo methods used for inference. However, the error in approximating the infinite-dimensional posterior can be hard to c…

Cited by 22 publications (23 citation statements)
References 58 publications
“…After the first break, the next break, k = 2, has the probability V_2(1 − V_1). On the other hand, when the process of breaking continues until an infinite number of clusters is created, the model becomes nonparametric with infinitely many mixture states/components [21]. However, the literature points out that working with the infinite-dimensional posterior distribution is computationally expensive [22].…”
Section: Dirichlet Process Mixtures of Generalized Linear Models
confidence: 99%
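The stick-breaking construction quoted above can be sketched with a few lines of code. This is a minimal illustration, not code from any of the cited papers; the function name, the concentration parameter `alpha`, and the seeding are placeholders. Each break point V_k is drawn from Beta(1, α), and the k-th weight is w_k = V_k ∏_{j<k} (1 − V_j), so for example w_2 = V_2(1 − V_1); truncating at level K leaves a small unassigned mass 1 − Σ w_k.

```python
import random

def stick_breaking_weights(alpha, K, seed=None):
    """First K weights of a truncated stick-breaking construction.

    V_k ~ Beta(1, alpha); w_k = V_k * prod_{j<k} (1 - V_j).
    The mass left unassigned by truncating at K, 1 - sum(weights),
    shrinks geometrically in expectation as K grows.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(K):
        v = rng.betavariate(1.0, alpha)  # break a fraction v off the stick
        weights.append(v * remaining)    # w_k = V_k * prod_{j<k}(1 - V_j)
        remaining *= 1.0 - v             # stick length left after this break
    return weights

w = stick_breaking_weights(alpha=2.0, K=50, seed=1)
print(round(sum(w), 4))  # close to 1: the truncation error is small
```

This makes concrete why truncation is attractive computationally: a finite K gives an ordinary finite mixture, at the cost of the (controllable) leftover mass.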
“…We use the method of Griffin (2016), which adaptively truncates the infinite sum in the numerator and denominator and tends to avoid large truncation errors in the posterior. We define a truncation of the infinite model in Eq.…”
Section: Inference in Bayesian NP-VAR
confidence: 99%
“…The adaptive truncation method of Griffin (2016) uses an MCMC algorithm to sample from the posterior, π_{K_0}(θ_{1:K_0}, η_{1:K_0} | y), for a user-defined starting value, K_0, and then uses a sequential Monte Carlo method to sample from the sequence of posterior distributions…”
Section: Inference in Bayesian NP-VAR
confidence: 99%
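The scheme this excerpt describes — MCMC at a starting truncation K_0, then sequential Monte Carlo through posteriors of growing truncation — might be sketched roughly as below. This is a toy illustration under invented names and a stand-in sequence of log-posteriors, not the algorithm of Griffin (2016): particles from the K-truncated posterior are importance-reweighted by π_{K+1}/π_K, and the truncation stops growing once the incremental weights are essentially flat. The full method additionally monitors effective sample size and resamples/moves particles.

```python
import random

def adaptive_truncation(log_post_at, particles, K0, K_max, tol=1e-4):
    """Toy sketch of an adaptive-truncation SMC pass (names hypothetical).

    particles: draws from the K0-truncated posterior (e.g. from MCMC).
    log_post_at(x, K): unnormalised log posterior at truncation level K.
    Each step reweights by pi_{K+1}/pi_K; stop when the incremental
    log-weights are numerically flat, i.e. truncation no longer matters.
    """
    logw = [0.0] * len(particles)
    K = K0
    while K < K_max:
        inc = [log_post_at(x, K + 1) - log_post_at(x, K) for x in particles]
        logw = [lw + d for lw, d in zip(logw, inc)]
        K += 1
        if max(inc) - min(inc) < tol:  # weights barely changed: stop
            break
    return K, logw

# Stand-in target: truncation K shifts a Gaussian mean by 2**-K, so the
# sequence of posteriors converges as K grows (purely illustrative).
def log_post_at(x, K):
    mu = 1.0 - 2.0 ** -K
    return -0.5 * (x - mu) ** 2

rng = random.Random(0)
particles = [rng.gauss(1.0 - 2.0 ** -3, 1.0) for _ in range(200)]  # ~ K0 = 3
K_stop, logw = adaptive_truncation(log_post_at, particles, K0=3, K_max=60)
print(K_stop)  # stops well before K_max once the increments flatten out
```

The appeal of the adaptive rule is visible here: the user supplies only a starting level K_0, and the algorithm itself decides how far the truncation must grow before the posterior stops changing.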
“…Recently, an a priori truncation method has been introduced by Griffin (2013), who proposes an adaptive truncation algorithm for posterior inference with priors either of stick-breaking or NRMI type.…”
Section: Introduction
confidence: 99%
“…On the other hand, as pointed out in Griffin (2013), there are two motivations for truncation: the study of the properties of the prior distribution, which is not our primary goal, and simpler calculation of posterior inference using these priors. Instead, with regard to theoretical results on approximation of Dirichlet processes based on the distributional equation for a DP given in Sethuraman (1994), we refer here to Muliere and Tardella (1998) and Favaro et al. (2012).…”
Section: Introduction
confidence: 99%