Stochastic simulation approaches perform probabilistic inference in Bayesian networks by estimating the probability of an event from the frequency with which that event occurs in a set of simulation trials. This paper describes the evidence weighting mechanism for augmenting the logic sampling stochastic simulation algorithm [5]. Evidence weighting modifies the logic sampling algorithm by weighting each simulation trial by the likelihood of the network's evidence given the sampled state node values for that trial. We also describe an enhancement to the basic algorithm which uses the evidential integration technique [2]. A comparison of the basic evidence weighting mechanism with the Markov blanket algorithm [8], the logic sampling algorithm, and the evidence integration algorithm is presented. The comparison is aided by analyzing the performance of the algorithms in a simple example network.
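The weighting idea is essentially likelihood weighting: each trial samples the non-evidence nodes from their conditional distributions in topological order, and the trial is scored by the probability of the evidence values given the sampled parents. The sketch below is a minimal illustration on a hypothetical two-node network; the node names, probabilities, and helper functions are assumptions made here for illustration, not taken from the paper.

```python
import random

# Hypothetical two-node network: Disease -> Symptom (both boolean).
P_DISEASE = 0.01
P_SYMPTOM_GIVEN = {True: 0.9, False: 0.1}   # P(symptom=True | disease)

def weighted_trial(evidence_symptom=True):
    """Run one evidence-weighted simulation trial.

    The non-evidence node is sampled from its prior; the trial's weight
    is the likelihood of the evidence given the sampled value.
    """
    disease = random.random() < P_DISEASE        # sample the state node
    p_sym = P_SYMPTOM_GIVEN[disease]
    weight = p_sym if evidence_symptom else 1.0 - p_sym
    return disease, weight

def estimate_posterior(n_trials=100_000):
    """Estimate P(disease=True | symptom=True) as a weighted frequency."""
    num = den = 0.0
    for _ in range(n_trials):
        disease, w = weighted_trial(evidence_symptom=True)
        den += w
        if disease:
            num += w
    return num / den

if __name__ == "__main__":
    print(estimate_posterior())   # roughly 0.083 for these illustrative numbers
```

Because unweighted logic sampling would simply discard trials whose sampled values contradict the evidence, the weighting lets every trial contribute, which matters most when the evidence is unlikely under the prior.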
Backward simulation is an approximate inference technique for Bayesian belief networks. It differs from existing simulation methods in that it starts simulation from the known evidence and works backward (i.e., contrary to the direction of the arcs). The technique's focus on the evidence leads to improved convergence in situations where the posterior beliefs are dominated by the evidence rather than by the prior probabilities. Since this class of situations is large, the technique may make practical the application of approximate inference in Bayesian belief networks to many real-world problems.
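As a rough illustration of the idea, the sketch below runs a backward-style trial on the same hypothetical two-node network used above: the parent of the evidence node is sampled from the normalized evidence likelihood rather than from its prior, and an importance weight corrects for that sampling choice. This is a simplified sketch under those assumptions, not the paper's full backward simulation algorithm.

```python
import random

# Same hypothetical two-node network: Disease -> Symptom (both boolean).
P_DISEASE = 0.01
P_SYMPTOM_GIVEN = {True: 0.9, False: 0.1}   # P(symptom=True | disease)

def backward_trial(evidence_symptom=True):
    """One backward-style trial starting from the evidence node Symptom.

    The parent (Disease) is sampled from the normalized likelihood of the
    evidence; the weight compensates with the likelihood's normalizer and
    the parent's prior, so weighted frequencies still estimate the posterior.
    """
    lik = {d: (P_SYMPTOM_GIVEN[d] if evidence_symptom else 1 - P_SYMPTOM_GIVEN[d])
           for d in (True, False)}
    z = lik[True] + lik[False]                   # normalizer of the likelihood
    disease = random.random() < lik[True] / z    # sample the parent backward
    prior = P_DISEASE if disease else 1 - P_DISEASE
    weight = prior * z                           # P(d) * P(e|d) / q(d), with q(d) = lik[d]/z
    return disease, weight

def estimate_posterior(n_trials=100_000):
    """Estimate P(disease=True | symptom=True) from weighted backward trials."""
    num = den = 0.0
    for _ in range(n_trials):
        d, w = backward_trial()
        den += w
        if d:
            num += w
    return num / den
```

The benefit over forward sampling shows up when the evidence is rare under the prior: forward trials almost never land in the interesting region, while backward trials always do and only the weights vary.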
Information retrieval (IR) is the identification of documents or other units of information in a collection that are relevant to a particular information need. An information need is a set of questions to which someone would like to find an answer. Here are some examples of IR tasks: finding articles in the New York Times that discuss the Iran-Contra affair; searching the recent postings in a Usenet newsgroup for references to a particular model of personal computer; finding the entries referring to butterflies in an online CD-ROM encyclopedia.
Research on Symbolic Probabilistic Inference (SPI) [2, 3] has provided an algorithm for resolving general queries in Bayesian networks. SPI applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. Unlike traditional Bayesian network inference algorithms, the SPI algorithm is goal directed, performing only those calculations that are required to respond to queries. Research to date on SPI applies to Bayesian networks with discrete-valued variables and does not address variables with continuous values. In this paper, we extend the SPI algorithm to handle Bayesian networks made up of continuous variables whose relationships are restricted to be "linear Gaussian". We call this variation of the SPI algorithm SPI Continuous (SPIC). SPIC modifies the three basic SPI operations: multiplication, summation, and substitution. However, SPIC retains the framework of the SPI algorithm, namely the construction of the search tree and the recursive query mechanism, and therefore retains the goal-directed and incrementality features of SPI.
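The reason the linear Gaussian restriction is workable is that the continuous analogues of multiplication, summation, and substitution all stay in closed form: combining and integrating Gaussian factors yields another Gaussian. The sketch below shows only that underlying Gaussian algebra for a single parent-child pair, not SPIC's search tree or query mechanism; the class and function names are assumptions made here for illustration.

```python
from dataclasses import dataclass

@dataclass
class Gaussian:
    mean: float
    var: float

def marginal_child(x: Gaussian, a: float, b: float, noise_var: float) -> Gaussian:
    """Summation analogue: integrate out X when Y | X ~ N(a*X + b, noise_var)."""
    return Gaussian(a * x.mean + b, a * a * x.var + noise_var)

def condition_parent(x: Gaussian, a: float, b: float,
                     noise_var: float, y_obs: float) -> Gaussian:
    """Multiplication/substitution analogue: posterior of X given an observed Y."""
    prec = 1.0 / x.var + a * a / noise_var       # posterior precision
    post_var = 1.0 / prec
    post_mean = post_var * (x.mean / x.var + a * (y_obs - b) / noise_var)
    return Gaussian(post_mean, post_var)

if __name__ == "__main__":
    x = Gaussian(0.0, 4.0)                                    # prior on a continuous parent
    print(marginal_child(x, a=2.0, b=1.0, noise_var=1.0))     # Gaussian(mean=1.0, var=17.0)
    print(condition_parent(x, a=2.0, b=1.0, noise_var=1.0, y_obs=5.0))
```

In a full implementation these closed-form updates would replace the discrete table operations at each node visited by SPI's backward search, leaving the goal-directed control structure unchanged.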