2007
DOI: 10.1007/978-3-540-72665-4_29
Improving Importance Sampling by Adaptive Split-Rejection Control in Bayesian Networks

Abstract: Importance sampling-based algorithms are a popular alternative when Bayesian network models are too large or too complex for exact algorithms. However, importance sampling is sensitive to the quality of the importance function. A bad importance function often leads to much oscillation in the sample weights and, hence, poor estimation of the posterior probability distribution. To address this problem, we propose the adaptive split-rejection control technique to adjust the samples with extremely large…

Cited by 6 publications (3 citation statements) · References 11 publications
“…However, a biased estimate would be obtained due to the unevenly distributed high-importance weights by the SIR method; that is, a few extremely large weights dominate the entire resampling process, so it is highly unlikely that other small weighted samples will be generated. To avoid the concentration of posterior sets caused by large weights, a wider sample range was explored by splitting the larger weights into smaller ones, which also proved that the results remained unbiased [49]. The process of inferring a posterior based on the ABM method is shown in Figure 1 and listed as follows:…”
Section: An Adaptive Bayesian Melding Method
confidence: 99%
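The splitting idea cited above can be sketched in a few lines: any sample whose importance weight exceeds a threshold is replaced by several duplicates that share its weight equally, so the total weight (and therefore the weighted estimate) is unchanged while no single sample can dominate resampling. The function name and threshold parameter `c` below are illustrative, not taken from the paper.

```python
import math

def split_large_weights(samples, weights, c):
    """Split any sample whose importance weight w exceeds the threshold c
    into k = ceil(w / c) duplicates, each carrying weight w / k.
    The summed weight is preserved exactly, so the self-normalized
    importance-sampling estimate remains unchanged (and thus unbiased)."""
    out_samples, out_weights = [], []
    for x, w in zip(samples, weights):
        if w > c:
            k = math.ceil(w / c)          # number of duplicates
            out_samples.extend([x] * k)
            out_weights.extend([w / k] * k)  # equal share of the weight
        else:
            out_samples.append(x)
            out_weights.append(w)
    return out_samples, out_weights
```

For example, a sample with weight 10 and threshold 3 becomes four copies of weight 2.5 each; resampling from the adjusted set then spreads probability mass over a wider range instead of concentrating on one dominant draw.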
“…Therefore, they mirror the Monte Carlo and Markov chain Monte Carlo approaches in the literature: rejection sampling, importance sampling, and sequential Monte Carlo among others. Two state-of-the-art examples are the adaptive importance sampling (AIS-BN) scheme [46] and the evidence pre-propagation importance sampling (EPIS-BN) [47].…”
Section: Inference
confidence: 99%
“…As mentioned above, batch resampling techniques based on rejection control (Liu, Chen, and Wong 1998; Yuan and Druzdzel 2007b) or sequential Monte Carlo (SMC) (Doucet, Godsill, and Andrieu 2000; Fan, Xu, and Shelton 2010), i.e., particle filtering, can mitigate the SIS weight variance problem, but they can lead to reduced particle diversity, especially when many resampling iterations are required.…”
Section: Related Work
confidence: 99%
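The rejection-control half of the technique (in the sense of Liu, Chen, and Wong 1998, cited above) is the mirror image of splitting: a low-weight sample is kept only with probability min(1, w/c), and survivors have their weight raised to max(w, c), so each sample's expected contribution is unchanged. This is a minimal sketch of that classic scheme, not the paper's exact adaptive variant; the function name and parameters are illustrative.

```python
import random

def rejection_control(samples, weights, c, rng=random):
    """Classic rejection control (Liu, Chen, and Wong 1998):
    accept each sample with probability min(1, w / c) and give
    accepted samples weight max(w, c). Since
    min(1, w/c) * max(w, c) = w, each sample's expected weight
    is preserved, so the estimator stays unbiased while
    low-weight samples are thinned out."""
    out = []
    for x, w in zip(samples, weights):
        if rng.random() < min(1.0, w / c):
            out.append((x, max(w, c)))
    return out
```

Samples whose weight already meets the threshold pass through deterministically with their original weight; only the low-weight tail is randomly pruned and compensated.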