2011
DOI: 10.1007/978-3-642-23780-5_47
Learning the Parameters of Probabilistic Logic Programs from Interpretations

Cited by 50 publications (70 citation statements)
References 13 publications
“…ProbLog [6], which is another probabilistic extension of Prolog, also employs a BDD-based parameter learning algorithm [7]. However, variational Bayesian inference for ProbLog has not yet been proposed, to our knowledge.…”
Section: Conclusion and Related Work (mentioning)
confidence: 99%
“…k-Best's ability to compile small, approximate BDDs is especially useful in the context of DTProbLog and parameter learning for ProbLog programs (Gutmann et al. 2008, 2011). These algorithms build and store BDDs for a large number of queries.…”
Section: Example (mentioning)
confidence: 99%
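The k-best idea mentioned in the excerpt can be illustrated with a minimal sketch: approximate a query's success probability from only its k most probable proofs rather than all of them. This is not the ProbLog implementation (which combines proofs exactly via BDDs); the proof list is invented, and for simplicity the sketch assumes proofs share no probabilistic facts, so they can be combined as independent events.

```python
from math import prod

def proof_prob(facts):
    # Probability of one proof: product of its probabilistic facts
    # (facts are assumed independent, as in ProbLog's semantics).
    return prod(facts)

def k_best_approx(proofs, k):
    # Keep only the k most probable proofs.
    top = sorted(proofs, key=proof_prob, reverse=True)[:k]
    # Combine as independent events (valid only under the simplifying
    # assumption that the proofs share no facts): 1 - prod(1 - P(proof)).
    miss = 1.0
    for pf in top:
        miss *= 1.0 - proof_prob(pf)
    return 1.0 - miss

# Four hypothetical proofs, each given as its facts' probabilities.
PROOFS = [(0.9, 0.8), (0.5, 0.5), (0.3, 0.2), (0.1, 0.1)]
print(round(k_best_approx(PROOFS, 2), 3))  # → 0.79
```

Using only the two best proofs (probabilities 0.72 and 0.25) yields 0.79, a lower bound on the value obtained from all four proofs (≈0.805) — which is why a small k gives smaller compiled structures at the cost of underestimating the probability.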
“…A further disadvantage of k-best is found in parameter learning for ProbLog programs (Gutmann et al. 2008, 2011) and the use of ProbLog in solving probabilistic decision problems (Van den Broeck et al. 2010). An example of such a decision problem is targeted advertising in social networks: in order to reduce advertising cost, one wishes to identify a small subset of nodes in a social network such that the expected number of people reached is maximized.…”
(mentioning)
confidence: 99%
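The decision problem in the excerpt — picking a small seed set that maximizes the expected number of nodes reached over probabilistic links — can be sketched with a standard greedy heuristic using Monte-Carlo reach estimation. This is an illustrative assumption, not DTProbLog's actual algorithm; the tiny graph and its edge probabilities are made up.

```python
import random

# Hypothetical social graph: node -> [(neighbour, P(edge is "live"))].
EDGES = {
    "a": [("b", 0.8), ("c", 0.5)],
    "b": [("d", 0.6)],
    "c": [("d", 0.4)],
    "d": [],
}

def reach(seeds, rng):
    # Sample one possible world: each edge is independently live with
    # its probability; return how many nodes the seeds reach in it.
    seen, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nb, p in EDGES[node]:
            if nb not in seen and rng.random() < p:
                seen.add(nb)
                frontier.append(nb)
    return len(seen)

def expected_reach(seeds, samples=2000, seed=0):
    # Monte-Carlo estimate of the expected number of nodes reached.
    rng = random.Random(seed)
    return sum(reach(seeds, rng) for _ in range(samples)) / samples

def greedy_seeds(k):
    # Greedily add the node giving the largest estimated expected reach.
    chosen = []
    for _ in range(k):
        best = max((n for n in EDGES if n not in chosen),
                   key=lambda n: expected_reach(chosen + [n]))
        chosen.append(best)
    return chosen

print(greedy_seeds(1))  # 'a' has the largest expected reach here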
“…In the present paper and [19], we have discussed a gradient descent approach to parameter learning for ProbLog in which the examples are ground facts together with their target probability. In [57], an approach to learning from interpretations based on an EM algorithm is introduced. There, each example specifies a possible world, that is, a set of ground facts together with their truth value.…”
Section: Related Work In Statistical Relational Learningmentioning
confidence: 99%