1999
DOI: 10.2307/2669940

Parameter Expansion for Data Augmentation

Cited by 111 publications (166 citation statements)
References: 0 publications
“…To address these issues, Ghosh & Dunson (2009) use parameter expansion (Liu & Wu, 1999; Gelman, 2006) to induce a heavy-tailed default prior distribution on the loading elements and propose an efficient Gibbs sampler.…”
Section: A. Bhattacharya and D. B. Dunson (citation type: mentioning)
Confidence: 99%
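A brief aside on why the expansion yields heavy tails, since the quoted statement only names the effect: multiplying a normally distributed working loading by a redundant scale whose square is inverse-gamma gives a Student-t marginal. The sketch below is a hypothetical illustration in Python (numpy assumed available); the hyperparameters a = b = 1/2 are chosen for illustration and are not taken from Ghosh & Dunson (2009).

```python
# Hypothetical illustration: parameter expansion as a normal scale mixture.
# If lambda_star ~ N(0, 1) and psi ~ Inverse-Gamma(a, b) independently, the
# induced loading lambda = lambda_star * sqrt(psi) has a Student-t-type
# marginal, i.e. a heavy-tailed default prior.
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.5, 0.5          # illustrative hyperparameters (nu = 1: Cauchy tails)
n = 1_000_000

lam_star = rng.standard_normal(n)                       # working loading
psi = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=n)   # Inverse-Gamma(a, b)
lam = lam_star * np.sqrt(psi)                           # induced loading

# The induced prior puts far more mass in the tails than the working normal.
print(np.mean(np.abs(lam) > 4.0), np.mean(np.abs(lam_star) > 4.0))
```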
“…Perhaps the most well-known subclass of sandwich algorithms in the existing literature is the class of parameter-expansion data augmentation (PXDA) algorithms introduced by Liu and Wu [19]. If there exists a group structure G that acts on Y, then any probability distribution r supported on G would correspond to a PXDA algorithm, and we denote its X-chain operator by P_r^G.…”
Section: Improving DA Algorithms Using Haar Algorithms (citation type: mentioning)
Confidence: 99%
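To make the quoted construction concrete, here is a minimal sketch of a PX-DA ("sandwich") sampler for Bayesian probit regression, the setting commonly used to illustrate Liu and Wu's algorithm. It assumes a flat prior on the coefficients, takes the group to be positive rescaling of the latent vector, and uses what I believe to be the standard form of the scale move (g² ~ Gamma(n/2, rate RSS/2)); the function name pxda_probit and the numpy/scipy calls are illustrative assumptions, not code from the cited papers.

```python
# Hypothetical sketch of a PX-DA ("sandwich") sampler for Bayesian probit
# regression with a flat prior on beta. The extra rescaling move between the
# two data-augmentation steps is the parameter-expansion ("sandwich") step.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def pxda_probit(y, X, n_iter=2000):
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    hat = X @ XtX_inv @ X.T                     # projection onto col(X)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # DA step 1: z_i | beta, y_i ~ N(x_i' beta, 1) truncated to the side
        # of zero determined by y_i.
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)     # forces z > 0 when y = 1
        hi = np.where(y == 1, np.inf, -mu)      # forces z < 0 when y = 0
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # Sandwich step: draw a positive scale g and set z <- g * z.
        # Assumed form: g^2 ~ Gamma(n/2, rate = RSS/2), RSS = z'(I - H)z.
        rss = z @ z - z @ hat @ z
        g = np.sqrt(rng.gamma(shape=n / 2.0, scale=2.0 / rss))
        z = g * z                               # signs of z (hence y) unchanged
        # DA step 2: beta | z ~ N((X'X)^{-1} X' z, (X'X)^{-1}).
        beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
        draws[t] = beta
    return draws

# Tiny synthetic check with hypothetical data.
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = (X @ np.array([0.5, 1.0]) + rng.standard_normal(200) > 0).astype(int)
print(pxda_probit(y, X, n_iter=500).mean(axis=0))
```

The multiplicative group is a natural choice here because rescaling z by g > 0 preserves the signs of the latent variables, so the move never leaves the set of augmentations consistent with the observed y; drawing g from the left Haar measure dg/g (tilted by the target) gives the Haar special case mentioned in the next quotation.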
“…It is implicitly stated in Liu and Wu [19, Section 5] that the Haar algorithm based on a locally compact group G′ is a special PXDA algorithm on G′ that corresponds to the left Haar measure r = ν_{G′}. (An alternative proof of the statement in the special case where G′ is a discrete group is provided in the online supplement of this paper.…”
Section: Appendix A: Proof of Proposition (citation type: mentioning)
Confidence: 99%
“…Of course the problem remains critical, for standard HMM theory does not allow for observation spaces of variable dimension. A possible solution can be the use of standard statistical techniques for the treatment of missing data [18] based on the EM algorithm. More interesting would be the learning of models based on hybrid systems composed by different HMMs each representing a state of occlusion.…”
Section: Towards Unsupervised Detection (citation type: mentioning)
Confidence: 99%