1996
DOI: 10.1080/00949659608811724

Reparameterizing the generalized linear model to accelerate Gibbs sampler convergence

Cited by 86 publications (76 citation statements). References 8 publications.
“…Specifically, since our consequentiality responses are ordinal, our model must contend with the estimation of cutpoint values. It is well-documented in the literature that standard Gibbs sampling schemes in such models can suffer from very poor mixing, particularly in moderately large data sets, thus producing imprecise and potentially inaccurate posterior inference [e.g., Cowles (1996), Nandram and Chen (1996)]. Our proposed posterior simulator offers significant improvements by sampling the cutpoints, latent willingness to pay and latent consequentiality variables in a single step rather than sampling each component from its corresponding complete posterior conditional distribution.…”
Section: Introduction (mentioning)
confidence: 99%
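The poor mixing the quoted passage refers to is easiest to see in the textbook single-move sampler for an ordered probit (Albert and Chib style data augmentation): each free cutpoint is drawn from a full conditional that is uniform on the gap between the largest latent utility below it and the smallest latent utility above it, and with many observations that gap becomes extremely narrow. The sketch below only illustrates that standard scheme, not the authors' proposed joint simulator; the function name, the flat prior on the coefficients, and the fixed unit error variance are assumptions for the example.

```python
# Illustrative single-move Gibbs sampler for an ordered probit model
# (standard data-augmentation scheme; NOT the joint sampler proposed in the
# paper quoted above). Assumes y takes integer values 0..n_cat-1 and every
# category is observed at least once.
import numpy as np
from scipy.stats import truncnorm

def gibbs_ordered_probit(y, X, n_cat, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    # Cutpoints: gamma[0] = -inf < gamma[1] = 0 < ... < gamma[n_cat-1] < gamma[n_cat] = +inf
    gamma = np.concatenate(([-np.inf, 0.0], np.arange(1.0, n_cat - 1), [np.inf]))
    XtX_inv = np.linalg.inv(X.T @ X)   # flat prior on beta, error variance fixed at 1
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # 1) Latent utilities: z_i ~ N(x_i'beta, 1) truncated to (gamma[y_i], gamma[y_i+1]).
        mu = X @ beta
        a, b = gamma[y] - mu, gamma[y + 1] - mu
        z = mu + truncnorm.rvs(a, b, random_state=rng)
        # 2) Coefficients: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
        beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
        # 3) Each free cutpoint, one at a time, uniform on the gap left between the
        #    neighbouring latent utilities. In large samples this gap is tiny, so the
        #    cutpoints barely move between sweeps (the slow mixing at issue).
        for c in range(2, n_cat):
            lower = max(gamma[c - 1], z[y == c - 1].max())
            upper = min(gamma[c + 1], z[y == c].min())
            gamma[c] = rng.uniform(lower, upper)
        draws[it] = beta
    return draws, gamma
```

Blocked schemes such as the one described in the quoted passage, or the Metropolis-Hastings update of Cowles (1996), instead move the cutpoints and the latent variables together rather than one full conditional at a time, which is what restores adequate mixing.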
“…Furthermore, following Nandram and Chen (1996) and Chen and Dey (2000), we assume that γ_{C−1} = 1; this is in addition to […]. Chen and Dey (2000) propose the following transformation of cutoff points:…”
Section: Bayesian Ordered Probit Model (mentioning)
confidence: 99%
“…Nandram and Chen (1996) and Chen and Dey (2000) provide an identification restriction based on the reparameterization of the cutoff points, that is, γ_{C−1} = 1. See also Jeliazkov et al. (2009, Section 2.2).…”
Section: Bayesian Ordered Probit Model (mentioning)
confidence: 99%
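As a rough sketch of how this kind of identification restriction works (our notation, not necessarily that of the cited papers): the ordered probit likelihood is unchanged when the coefficients, the error scale, and the cutpoints are all multiplied by the same positive constant, so dividing everything by the largest finite cutpoint pins that cutpoint at 1 and leaves the error variance free to be estimated.

```latex
% Illustrative sketch (our notation). Latent-variable form of the ordered
% probit with C categories and cutpoints gamma_0 < gamma_1 < ... < gamma_C:
%   z_i = x_i' beta + eps_i,   eps_i ~ N(0, sigma^2),
%   y_i = c  <=>  gamma_{c-1} < z_i <= gamma_c,
%   gamma_0 = -infty,  gamma_1 = 0,  gamma_C = +infty.
% Rescaling by 1 / gamma_{C-1} leaves the likelihood invariant and fixes the
% largest finite cutpoint at 1:
\[
  \tilde{\gamma}_c = \frac{\gamma_c}{\gamma_{C-1}}, \qquad
  \tilde{\beta} = \frac{\beta}{\gamma_{C-1}}, \qquad
  \tilde{\sigma} = \frac{\sigma}{\gamma_{C-1}}
  \quad\Longrightarrow\quad
  \tilde{\gamma}_{C-1} = 1 .
\]
```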