Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/160

Induction of Interpretable Possibilistic Logic Theories from Relational Data

Abstract: The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented using some kind of weighted logical formulas, which make them considerably more interpretable than those obtained by e.g. neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways and weights do not always have an intuitive meaning. To a…

Cited by 8 publications (11 citation statements)
References: 0 publications

Citation statements (ordered by relevance):
“…Indeed, having a stratified set of first-order logic rules as a hypothesis in ILP is of interest for learning both rules covering normal cases and more specific rules for exceptional cases [62]. A different approach to the induction of possibilistic logic theories is proposed in [51]. It relies on the fact that any set of formulas in Markov logic [61] can be exactly translated into possibilistic logic formulas [50,46].…”
Section: Discussion
Confidence: 99%
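The cited fact, that a set of Markov logic formulas can be exactly translated into possibilistic logic, rests on both formalisms inducing the same preference ordering over possible worlds: in Markov logic, a world is penalized by the total weight of the formulas it violates. Below is a minimal propositional sketch of that penalty ordering in Python; the formulas, weights, and names are illustrative assumptions, and the full translation of [50,46] is considerably more involved.

```python
from itertools import product

# Two weighted propositional formulas over atoms p and q, standing in
# for a tiny Markov logic theory (illustrative choice of formulas).
weighted_formulas = [
    (2.0, lambda w: (not w["p"]) or w["q"]),  # p -> q, weight 2.0
    (1.0, lambda w: w["p"]),                  # p,      weight 1.0
]

def penalty(world):
    """Total weight of violated formulas: in Markov logic, worlds with
    lower penalty are more probable, and it is this ordering of worlds
    that an equivalent possibilistic logic theory must reproduce."""
    return sum(wt for wt, holds in weighted_formulas if not holds(world))

# Enumerate all four worlds and stratify them by penalty.
worlds = [dict(zip(("p", "q"), vals))
          for vals in product([False, True], repeat=2)]
for w in sorted(worlds, key=penalty):
    print(w, "-> penalty", penalty(w))
```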
“…Relational examples. The learning setting considered in this paper follows the one that was introduced in [9,8]. The central notion is that of a relational example (or simply example, if there is no cause for confusion), which is defined as a pair (A, C), with C a set of constants and A a set of ground atoms which only use constants from C. A relational example is intended to provide a complete description of a possible world; hence any ground atom over C which is not contained in A is implicitly assumed to be false.…”
Section: Relational Learning Setting
Confidence: 99%
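To make the definition concrete, here is a minimal Python sketch of a relational example (A, C) with the closed-world reading described above; the class, field, and method names are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass

# A ground atom is encoded as a tuple: predicate name followed by
# constants, e.g. ("friends", "alice", "bob").
Atom = tuple

@dataclass(frozen=True)
class RelationalExample:
    """A pair (A, C): the ground atoms A that hold, over constants C."""
    atoms: frozenset      # A: the ground atoms that are true
    constants: frozenset  # C: the constants the example describes

    def is_true(self, atom: Atom) -> bool:
        # Closed-world assumption: any ground atom over C that is not
        # contained in A is implicitly false.
        return atom in self.atoms

# A small possible world over three constants.
world = RelationalExample(
    atoms=frozenset({("friends", "alice", "bob")}),
    constants=frozenset({"alice", "bob", "carol"}),
)
assert world.is_true(("friends", "alice", "bob"))
assert not world.is_true(("friends", "alice", "carol"))  # closed world
```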
“…Clearly, in order to provide any guarantees on the accuracy of these predictions, we need to make (simplifying) assumptions about how the training structures are obtained. In this paper, we follow the setting from [9,8], where it is assumed that these structures are all obtained as fragments induced by domain elements sampled uniformly without replacement.…”
Section: Introduction
Confidence: 99%
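That sampling assumption can be made concrete with a short sketch, reusing the tuple-based atom encoding from the previous snippet: draw k domain elements uniformly without replacement and keep the induced fragment, i.e. exactly those atoms whose arguments all fall inside the sample. The function name and signature are assumptions for illustration, not the authors' code.

```python
import random

def induced_fragment(atoms, constants, k, rng=random):
    """Fragment of a possible world induced by k domain elements drawn
    uniformly at random without replacement (illustrative sketch)."""
    sampled = set(rng.sample(sorted(constants), k))
    # Keep only the ground atoms whose arguments all lie in the sample.
    kept = {a for a in atoms if set(a[1:]) <= sampled}
    return kept, sampled

# Example: training structures as small fragments of one larger world.
atoms = {("friends", "alice", "bob"), ("friends", "bob", "carol"),
         ("smokes", "carol")}
constants = {"alice", "bob", "carol", "dave"}
fragment_atoms, fragment_constants = induced_fragment(atoms, constants, k=2)
```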
“…Thus, first, we describe two different types of relational marginals, which differ in the kinds of statistics that are provided. The first type is based on relational marginal distributions (Kuželka, Davis, and Schockaert 2017) and the second is based on Halpern-style random substitution semantics (Bacchus et al. 1992). Second, for both types of statistics, we establish a relational counterpart of the duality between maximum-likelihood estimation and max-entropy marginal problems.…”
Section: Introduction
Confidence: 99%
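For orientation, the propositional instance of the duality mentioned here is the classical one; the statement below is a standard fact added for context, not taken from the citing paper. The maximum-entropy distribution whose expected statistics match given values is a log-linear model, and its dual-optimal weights coincide with the maximum-likelihood estimates:

```latex
\max_{p}\; H(p)
\quad \text{s.t.} \quad \mathbb{E}_{p}[\phi_i] = \hat{m}_i \;\;(i = 1,\dots,k)
\qquad \Longrightarrow \qquad
p_{\theta}(x) \propto \exp\Big(\sum_{i} \theta_i\, \phi_i(x)\Big),
```

where the optimal weights \theta_i are exactly those maximizing the likelihood of data whose empirical statistics are \hat{m}_i.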