A recent and effective approach to probabilistic inference calls for reducing the problem to one of weighted model counting (WMC) on a propositional knowledge base. Specifically, the approach calls for encoding the probabilistic model, typically a Bayesian network, as a propositional knowledge base in conjunctive normal form (CNF), with weights associated with each model according to the network parameters. Given this CNF, computing the probability of some evidence becomes a matter of summing the weights of all CNF models consistent with the evidence. A number of variations on this approach have appeared recently in the literature, varying across three orthogonal dimensions. The first dimension concerns the specific encoding used to convert a Bayesian network into a CNF. The second dimension relates to whether weighted model counting is performed using a search algorithm on the CNF, or by compiling the CNF into a structure that renders WMC a polytime operation in the size of the compiled structure. The third dimension deals with the specific properties of network parameters (local structure) that are captured in the CNF encoding. In this paper, we discuss recent work in this area across these three dimensions, and demonstrate empirically its practical importance in significantly expanding the reach of exact probabilistic inference. We restrict our discussion to exact inference and exact model counting, even though some of these proposals have been extended to approximate inference and approximate model counting.
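To make the reduction concrete, the brute-force sketch below sums the weights of the models of a tiny CNF that are consistent with some evidence. The clauses, literal weights, and evidence are invented for illustration and do not correspond to any particular network encoding from the literature.

```python
# Minimal brute-force illustration of weighted model counting (WMC).
# The CNF, literal weights, and evidence are made up for illustration.
from itertools import product

# CNF over variables 1..3; clauses are sets of literals (negative = negated variable).
cnf = [{1, -2}, {2, 3}]

# Weight of each literal; the weight of a model is the product of its literal weights.
weight = {1: 0.3, -1: 0.7, 2: 1.0, -2: 1.0, 3: 0.6, -3: 0.4}

def wmc(cnf, weight, evidence=None):
    """Sum the weights of all CNF models consistent with the evidence."""
    evidence = evidence or {}
    variables = sorted({abs(l) for clause in cnf for l in clause})
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if any(assignment[v] != val for v, val in evidence.items()):
            continue  # assignment contradicts the evidence
        if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in cnf):
            model_weight = 1.0
            for v, val in assignment.items():
                model_weight *= weight[v if val else -v]
            total += model_weight
    return total

# Under a suitable encoding, this weighted count is the probability of the evidence.
print(wmc(cnf, weight, evidence={3: True}))
```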
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available Primula tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks, whose Primula-generated propositional instances have thousands of variables, and whose jointrees have clusters with hundreds of variables.
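The sketch below illustrates the two linear-time passes on a hand-built arithmetic circuit: a bottom-up pass that evaluates the circuit and a top-down pass that computes partial derivatives at every node. The circuit and its node representation are illustrative stand-ins, not the output of compiling a Primula instance.

```python
# Toy illustration of evaluating and differentiating an arithmetic circuit (AC)
# in time linear in its size. The circuit below is hand-built for illustration.

class Node:
    def __init__(self, op, children=(), value=None):
        self.op = op            # 'leaf', '+', or '*'
        self.children = list(children)
        self.value = value      # leaf value (indicator or parameter)
        self.derivative = 0.0   # partial derivative of the circuit output

def evaluate(nodes):
    """Bottom-up pass; nodes are assumed topologically ordered, children first."""
    for n in nodes:
        if n.op == '+':
            n.value = sum(c.value for c in n.children)
        elif n.op == '*':
            v = 1.0
            for c in n.children:
                v *= c.value
            n.value = v
    return nodes[-1].value

def differentiate(nodes):
    """Top-down pass; accumulate d(output)/d(node) for every node."""
    for n in nodes:
        n.derivative = 0.0
    nodes[-1].derivative = 1.0
    for n in reversed(nodes):
        for c in n.children:
            if n.op == '+':
                c.derivative += n.derivative
            else:  # product node: derivative times the product of the siblings
                other = 1.0
                for s in n.children:
                    if s is not c:
                        other *= s.value
                c.derivative += n.derivative * other

# Circuit computing 0.2*x + 0.8*y, with indicators x, y set to 1.
x, y = Node('leaf', value=1.0), Node('leaf', value=1.0)
p1, p2 = Node('leaf', value=0.2), Node('leaf', value=0.8)
m1, m2 = Node('*', [p1, x]), Node('*', [p2, y])
root = Node('+', [m1, m2])
order = [x, y, p1, p2, m1, m2, root]
print(evaluate(order))              # value of the circuit
differentiate(order)
print(x.derivative, y.derivative)   # partial derivatives w.r.t. the indicators
```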
We present in this paper a case study of the probabilistic approach to model-based diagnosis. Here, the diagnosed system is a real-world electrical power system (EPS), namely the Advanced Diagnostic and Prognostic Testbed (ADAPT) located at the NASA Ames Research Center. Our probabilistic approach is formally well founded and based on Bayesian networks (BNs) and arithmetic circuits (ACs). We pay special attention to meeting two of the main challenges often associated with real-world application of model-based diagnosis technologies: model development and real-time reasoning. To address the challenge of model development, we develop a systematic approach to representing EPSs as BNs, supported by an easy-to-use specification language. To address the real-time reasoning challenge, we compile BNs into ACs. AC evaluation (ACE) supports real-time diagnosis by being predictable, fast, and exact. In experiments with the ADAPT BN, which contains 503 discrete nodes and 579 edges, diagnosis results were accurate, and the time taken to compute the most probable explanation using ACs had a mean of 0.2625 ms and a standard deviation of 0.2028 ms. In comparative experiments, we found that, while the variable elimination and join tree propagation algorithms also perform very well in the ADAPT setting, ACE was at least an order of magnitude faster.
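For readers unfamiliar with the query involved, the toy sketch below answers an MPE (most probable explanation) diagnosis query on an invented two-component model by brute force; the system described above answers the same kind of query on the ADAPT BN through compiled ACs, which is not reproduced here.

```python
# Toy illustration of the MPE query used for diagnosis: find the most likely
# assignment of health variables given an observation. The two-component model
# and its numbers are invented, and the search is brute force.
from itertools import product

# Health variables for two hypothetical components and their prior probabilities of health.
prior = {'relay_ok': 0.99, 'sensor_ok': 0.95}

def likelihood(health, reading):
    """P(observed reading | component health); a made-up sensor model."""
    if health['relay_ok'] and health['sensor_ok']:
        return 0.98 if reading == 'nominal' else 0.02
    return 0.10 if reading == 'nominal' else 0.90

def mpe(reading):
    best, best_p = None, -1.0
    for values in product([True, False], repeat=len(prior)):
        health = dict(zip(prior, values))
        p = likelihood(health, reading)
        for var, ok in health.items():
            p *= prior[var] if ok else 1.0 - prior[var]
        if p > best_p:
            best, best_p = health, p
    return best, best_p

print(mpe('anomalous'))   # most likely health assignment given an anomalous reading
```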
Recent algorithms for model counting and compilation work by decomposing a CNF into syntactically independent components through variable splitting, and then solving the components recursively and independently. In this paper, we observe that syntactic component analysis can miss decomposition opportunities because the syntax may hide existing semantic independence, leading to unnecessary variable splitting. Moreover, we show that by applying a limited resolution strategy to the CNF prior to inference, one can transform the CNF to syntactically reveal such semantic independence. We describe a general resolution strategy for this purpose, and a more specific one that utilizes problem-specific structure. We apply our proposed techniques to CNF encodings of Bayesian networks, which can be used to answer probabilistic queries through weighted model counting and/or knowledge compilation. Experimental results demonstrate that our proposed techniques can have a large effect on the efficiency of inference, reducing time and space requirements significantly and allowing inference to be performed on many CNFs that previously exhausted resources.
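The sketch below shows the syntactic component analysis referred to above: clauses that share no variables form independent components whose model counts simply multiply. The CNF and the brute-force counter are illustrative only; real counters interleave this decomposition with variable splitting and caching.

```python
# Syntactic component decomposition for model counting: clauses that share no
# variables can be counted independently, and the counts multiply.
from itertools import product

def components(cnf):
    """Group clauses into connected components of the variable-sharing graph."""
    comps, remaining = [], list(cnf)
    while remaining:
        seed = remaining.pop()
        comp, comp_vars = [seed], {abs(l) for l in seed}
        grew = True
        while grew:
            grew = False
            for clause in remaining[:]:
                if comp_vars & {abs(l) for l in clause}:
                    comp.append(clause)
                    remaining.remove(clause)
                    comp_vars |= {abs(l) for l in clause}
                    grew = True
        comps.append(comp)
    return comps

def count(cnf):
    """Brute-force model count over the variables mentioned in the CNF."""
    variables = sorted({abs(l) for clause in cnf for l in clause})
    total = 0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in cnf):
            total += 1
    return total

cnf = [[1, 2], [-2, 3], [4, 5]]    # clause [4, 5] shares no variables with the rest
parts = components(cnf)
print([count(p) for p in parts])   # counts of the independent components
print(count(cnf))                  # equals the product of the component counts
```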
In this work, we analyze the performance of Clone, a recently introduced weighted Max-SAT solver, in the 2007 Max-SAT Evaluation. Clone utilizes a novel bound computation based on formula compilation, which allows it to search in a reduced search space. We study how additional techniques from the SAT and Max-SAT literature affect the performance of Clone on problems from the evaluation. We then perform further investigations into factors that may affect the performance of leading Max-SAT solvers. We empirically identify two properties of weighted Max-SAT problems that can be used to adjust the difficulty level of the problems with respect to the considered solvers.
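As background for the bound computation mentioned above, the sketch below is a bare-bones branch-and-bound procedure for weighted Max-SAT that prunes with a deliberately trivial lower bound: the weight of clauses already falsified by the partial assignment. Clone's compilation-based bound is much tighter and is not reproduced here; the clauses and weights are illustrative.

```python
# Bare-bones branch-and-bound for weighted Max-SAT: minimize the total weight
# of falsified clauses. The lower bound here is intentionally simple.

def solve(clauses, weights, variables, assignment=None, best=float('inf')):
    assignment = assignment or {}
    # Lower bound: total weight of clauses already falsified by the partial assignment.
    falsified = sum(
        w for clause, w in zip(clauses, weights)
        if all(abs(l) in assignment and assignment[abs(l)] != (l > 0) for l in clause)
    )
    if falsified >= best:
        return best                      # prune: cannot improve on the incumbent
    unassigned = [v for v in variables if v not in assignment]
    if not unassigned:
        return falsified                 # complete assignment; its cost is exact
    v = unassigned[0]
    for value in (True, False):
        best = min(best, solve(clauses, weights, variables,
                               {**assignment, v: value}, best))
    return best

clauses = [[1, 2], [-1, 2], [-2], [1, -2]]        # illustrative weighted clauses
weights = [3, 1, 4, 2]
print(solve(clauses, weights, variables=[1, 2]))  # minimum weight of falsified clauses
```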