Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks, such as computing marginals given evidence and learning from (partial) interpretations, have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks, based on converting the program, the queries and the evidence into a weighted Boolean formula. This allows us to reduce inference tasks to well-studied problems such as weighted model counting, which can be solved using state-of-the-art methods from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning-from-interpretations setting. The algorithm employs expectation-maximization and is built on top of the developed inference algorithms. The proposed approach is evaluated experimentally. The results show that the inference algorithms improve upon the state of the art in probabilistic logic programming, and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
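To make the reduction concrete, the sketch below works through a toy example: a hypothetical two-fact program (0.1::burglary, 0.2::earthquake, with alarm derivable from either fact) is turned into its completion, and the query probability is obtained by brute-force weighted model counting over the probabilistic facts. The program, the predicate names and the enumeration strategy are illustrative assumptions; the paper itself relies on knowledge compilation rather than naive enumeration.

```python
from itertools import product

# A hypothetical two-fact program (illustrative, not from the paper):
#   0.1::burglary.   0.2::earthquake.
#   alarm :- burglary.   alarm :- earthquake.
prob_facts = {"burglary": 0.1, "earthquake": 0.2}

def query_probability(query):
    """Brute-force weighted model counting: sum the weights of all truth
    assignments to the probabilistic facts in which the query atom is true."""
    total = 0.0
    for values in product([True, False], repeat=len(prob_facts)):
        world = dict(zip(prob_facts, values))
        # Clark completion of the clauses for alarm: alarm <-> burglary v earthquake.
        world["alarm"] = world["burglary"] or world["earthquake"]
        if world[query]:
            weight = 1.0
            for fact, p in prob_facts.items():
                weight *= p if world[fact] else 1.0 - p
            total += weight
    return total

print(query_probability("alarm"))  # 0.28 = 1 - 0.9 * 0.8
```

The brute-force loop enumerates all 2^n worlds and therefore only serves to show what quantity is being computed; the compiled representations discussed in the paper avoid this exponential enumeration.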
We review Logical Bayesian Networks, a language for probabilistic logical modelling, and discuss its relation to Probabilistic Relational Models and Bayesian Logic Programs.

Probabilistic Logical Models
Probabilistic logical models combine aspects of probability theory with aspects of logic programming, first-order logic or relational languages. A variety of languages for describing such models has recently been introduced, and for some of these languages techniques exist to learn models from data. Two examples are Probabilistic Relational Models (PRMs) [4] and Bayesian Logic Programs (BLPs) [5], probably the most popular and best-known languages in the Relational Data Mining community. We introduce a new language, Logical Bayesian Networks (LBNs) [2], that is strongly related to PRMs and BLPs yet solves some of their knowledge representation problems (related to expressiveness and intuitiveness). PRMs, BLPs and LBNs all follow the principle of Knowledge-Based Model Construction: they offer a language for specifying general probabilistic logical knowledge, together with a methodology for constructing a propositional model from this knowledge when given a specific problem domain. We focus on the case where the propositional model is a Bayesian network. The idea is to combine the strengths of Bayesian networks and of first-order logic.

Logical Bayesian Networks (LBNs)
LBNs use a logic programming based language. Because we want to distinguish between logical (deterministic) knowledge and probabilistic knowledge, LBNs use two different sets of predicates: ordinary logical predicates and probabilistic 'predicates'. The latter differ from ordinary predicates in that they have an associated range and are used to represent random variables: e.g. if ranking/1 and grade/2 are probabilistic predicates, the atoms ranking(joe) and grade(joe,ai) represent random variables denoting the ranking of 'joe' and the grade of 'joe' for 'ai', and can hence take any value that a ranking or grade can take (not simply true or false as for logical atoms).

An LBN defines a mapping from interpretations of the logical predicates (i.e. descriptions of the problem domain) to Bayesian networks. An LBN consists of three major components. The first is a set of clauses called the random variable declarations: e.g. the clause random(iq(S)) <- student(S) specifies that iq(S) is a random variable in the Bayesian network if S is a student. The second is a set of clauses called the conditional dependency clauses: e.g. the clause ranking(S) | grade(S,C) <- takes(S,C) specifies that if student S takes course C then ranking(S) conditionally depends on grade(S,C), i.e. there is a directed edge in the Bayesian network from grade(S,C) to ranking(S). The third component is a set of logical CPDs, a more expressive counterpart of ordinary Conditional Probability Distributions (CPDs), used to determine the CPDs in the Bayesian network.

In designing LBNs we tried to unravel the differen...
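The sketch below illustrates, under assumed data, how the knowledge-based model construction step could look for the clauses quoted above: given an interpretation of the logical predicates student/1 and takes/2, it derives the random variables and the directed edges of the induced Bayesian network. The interpretation and the extra random variable declarations (random(iq(S)) <- student(S), random(grade(S,C)) <- takes(S,C), random(ranking(S)) <- student(S)) are illustrative assumptions, not taken from the text.

```python
# Interpretation of the logical predicates (illustrative problem domain).
students = {"joe", "ann"}
takes = {("joe", "ai"), ("ann", "ai")}

# Assumed random variable declarations:
#   random(iq(S))      <- student(S)
#   random(grade(S,C)) <- takes(S,C)
#   random(ranking(S)) <- student(S)
nodes = {f"iq({s})" for s in students}
nodes |= {f"grade({s},{c})" for (s, c) in takes}
nodes |= {f"ranking({s})" for s in students}

# Conditional dependency clause from the text:
#   ranking(S) | grade(S,C) <- takes(S,C)
# i.e. grade(S,C) is a parent of ranking(S) whenever takes(S,C) holds.
edges = {(f"grade({s},{c})", f"ranking({s})") for (s, c) in takes}

print(sorted(nodes))
print(sorted(edges))  # [('grade(ann,ai)', 'ranking(ann)'), ('grade(joe,ai)', 'ranking(joe)')]
```

The nodes and edges would then be completed with the CPDs obtained from the logical CPD component to yield a full Bayesian network for this particular domain.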
In this paper we describe the application of data mining methods to predicting the evolution of patients in an intensive care unit. We discuss the importance of such methods for health care and other engineering application domains. We argue that this problem is important but challenging for current state-of-the-art data mining methods and explain which improvements to current methods would be useful. We present a promising study on a preliminary data set that demonstrates some of the possibilities in this area.
Lifted probabilistic inference algorithms exploit regularities in the structure of graphical models to perform inference more efficiently. More specifically, they identify groups of interchangeable variables and perform inference once per group, as opposed to once per variable. The groups are defined by means of constraints, so the flexibility of the grouping is determined by the expressivity of the constraint language. Existing approaches for exact lifted inference use specific languages for (in)equality constraints, which often have limited expressivity. In this article, we decouple lifted inference from the constraint language. We define operators for lifted inference in terms of relational algebra operators, so that they operate on the semantic level (the constraints' extension) rather than on the syntactic level, making them language-independent. As a result, lifted inference can be performed using more powerful constraint languages, which provide more opportunities for lifting. We empirically demonstrate that this can improve inference efficiency by orders of magnitude, allowing exact inference where until now only approximate inference was feasible.
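As a rough illustration of operating on a constraint's extension rather than its syntax, the sketch below represents a constraint as a plain relation (a set of tuples) and groups the groundings of an atom using two relational-algebra-style operators, selection and projection. The relation, the atom friends(X, Y) and the grouping criterion are illustrative assumptions; the article's actual lifted-inference operators are considerably richer.

```python
# Extension of a constraint on logical variables (X, Y): the allowed tuples.
# The relation and the atom friends(X, Y) are illustrative assumptions.
extension = {("alice", "bob"), ("alice", "carol"), ("dave", "carol")}

def select(rel, pos, value):
    """Relational selection: keep tuples whose argument at 'pos' equals value."""
    return {t for t in rel if t[pos] == value}

def project(rel, positions):
    """Relational projection onto the given argument positions."""
    return {tuple(t[p] for p in positions) for t in rel}

# Group the groundings of friends(X, Y) by the value of X; within a group the
# ground atoms are treated as interchangeable, so inference is performed once
# per group rather than once per grounding.
groups = {x: select(extension, 0, x) for (x,) in project(extension, (0,))}
for x, tuples in sorted(groups.items()):
    print(x, "-> group of", len(tuples), "interchangeable groundings")
```

Because the operators manipulate the extension directly, the same grouping works no matter which syntactic constraint language generated the relation.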