2019
DOI: 10.1051/ps/2018023
Optimal compromise between incompatible conditional probability distributions, with application to Objective Bayesian Kriging

Abstract: Models are often defined through conditional rather than joint distributions, but it can be difficult to check whether the conditional distributions are compatible, i.e. whether there exists a joint probability distribution which generates them. When they are compatible, a Gibbs sampler can be used to sample from this joint distribution. When they are not, the Gibbs sampling algorithm may still be applied, resulting in a "pseudo-Gibbs sampler". We show its stationary probability distribution to be the optimal …
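The pseudo-Gibbs construction described in the abstract can be illustrated with a minimal sketch (assumed toy conditionals, not the paper's Kriging model): two Gaussian full conditionals with unit variance but unequal regression coefficients admit no joint distribution, yet alternating draws from them still settle into a well-defined stationary law.

```python
import numpy as np

# Minimal pseudo-Gibbs sketch (illustrative, not the paper's setting).
# The two Gaussian full conditionals below both have unit variance but
# unequal regression coefficients a != b, so no joint distribution
# generates them: compatibility of N(a*y, 1) and N(b*x, 1) forces a = b.
rng = np.random.default_rng(0)
a, b = 0.3, 0.8          # assumed coefficients, chosen to be incompatible
n_iter = 200_000

x, y = 0.0, 0.0
xs = np.empty(n_iter)
ys = np.empty(n_iter)
for t in range(n_iter):
    x = rng.normal(a * y, 1.0)   # "draw" from the conditional p(x | y)
    y = rng.normal(b * x, 1.0)   # "draw" from the conditional p(y | x)
    xs[t], ys[t] = x, y

# The chain is a stable linear autoregression (|a*b| < 1), so it still has
# a stationary law even though the conditionals are incompatible; solving
# the variance recursions gives Var(x) = (1 + a^2) / (1 - a^2 * b^2).
print(np.var(xs), (1 + a**2) / (1 - (a * b) ** 2))
```

The scanning order matters here: sweeping y before x would yield a different stationary distribution, which is why the pseudo-Gibbs limit depends on the scan and an optimality criterion for the compromise is needed.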

Cited by 5 publications (3 citation statements)
References 29 publications
“…Note that an alternative solution (not detailed here) is proposed by Muré, J. [79] to make the full-Bayesian procedure tractable: univariate conditional Jeffreys-rule posterior distributions and pseudo-Gibbs sampler are notably used.…”
Section: Focus on RobustGaSP methods
mentioning, confidence: 99%
“…An advantage of Tri-training semi-supervised learning is that it can classify using a small amount of labeled corpus together with a large amount of unlabelled corpus. Military equipment support information is mostly classified as confidential, so a labeled corpus is difficult to obtain. Therefore, the semi-supervised Tri-training algorithm is applied to train a better classification model on a small labeled corpus and a large unlabelled corpus [29][30][31].…”
Section: ()
mentioning, confidence: 99%
“…Similar to what was done earlier, here we will again focus attention on a more widely applicable method: one that simply assumes that the joint density of the parameters θ most closely corresponding to the set of full conditional densities in equation (2) equals the limiting density of a Gibbs sampling algorithm based on these conditional densities, with some given fixed or random scanning order of the parameters concerned. This approach relates to more specific methods for addressing the general problem of interest that were discussed in Chen, Ip and Wang (2011) and Muré (2019).…”
Section: Finding compatible approximations to incompatible full condi…
mentioning, confidence: 99%