2012
DOI: 10.1007/s10994-012-5307-6

Learning compact Markov logic networks with decision trees

Abstract: Statistical-relational learning combines logical syntax with probabilistic methods. Markov Logic Networks (MLNs) are a prominent model class that generalizes both first-order logic and undirected graphical models (Markov networks). The qualitative component of an MLN is a set of clauses and the quantitative component is a set of clause weights. Generative MLNs model the joint distribution of relationships and attributes. A state-of-the-art structure learning method is the moralization approach: learn a set of di…
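As context for the two components the abstract names, here is a minimal runnable sketch of an MLN as weighted clauses, scored with the standard log-linear form (clause weight times true-grounding count, summed over clauses). The toy domain, predicates, clauses, and weights are illustrative assumptions, not taken from the paper.

```python
import itertools

# Minimal MLN sketch: the qualitative component is a set of clauses,
# the quantitative component a weight per clause. The toy domain and
# all values below are illustrative assumptions, not from the paper.

CONSTANTS = ["anna", "bob"]

# A possible world: truth values for all ground atoms.
world = {
    ("Smokes", ("anna",)): True,
    ("Smokes", ("bob",)): False,
    ("Cancer", ("anna",)): True,
    ("Cancer", ("bob",)): False,
    ("Friends", ("anna", "bob")): True,
    ("Friends", ("bob", "anna")): True,
    ("Friends", ("anna", "anna")): False,
    ("Friends", ("bob", "bob")): False,
}

def atom(w, pred, *args):
    return w[(pred, args)]

# Clauses as predicates over one grounding: (clause, arity, weight).
mln = [
    # Smokes(x) => Cancer(x), weight 1.5
    (lambda w, x: (not atom(w, "Smokes", x)) or atom(w, "Cancer", x), 1, 1.5),
    # Friends(x,y) ^ Smokes(x) => Smokes(y), weight 1.1
    (lambda w, x, y: (not (atom(w, "Friends", x, y) and atom(w, "Smokes", x)))
                     or atom(w, "Smokes", y), 2, 1.1),
]

def log_score(world, mln, constants):
    """Unnormalized log-probability: weighted sum of true-grounding counts."""
    total = 0.0
    for clause, arity, weight in mln:
        n_true = sum(clause(world, *args)
                     for args in itertools.product(constants, repeat=arity))
        total += weight * n_true
    return total

print(log_score(world, mln, CONSTANTS))  # 1.5*2 + 1.1*3 = 6.3
```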

Cited by 17 publications (10 citation statements) · References 28 publications
“…Our code and datasets are available on the world-wide web (Khosravi et al 2012). We made use of the following existing implementations.…”
Section: Discussion
confidence: 99%
“…However to our knowledge, no implementations of such structure learning algorithms for directed graphical models are available. Our system builds on the state-of-the-art Bayes net learner for relational data, whose code is available at [6]. Implementations exist for other types of graphical models, specifically Markov random fields (undirected models) [2] and dependency networks (directed edges with cycles allowed) [10].…”
Section: Related Work
confidence: 99%
“…In terms of graphical models, an MLN is a template for a Markov random field, with a log-linear likelihood function that is the weighted sum of counts of features defined by the first-order formulas. A state-of-the-art approach to learning the clauses in an MLN is to first learn a set of decision trees and then convert each branch of each decision tree to an MLN clause [36,33]. The weights of the clauses are obtained from probability estimation trees by using the log-conditional probabilities associated with a leaf [36], and from regression trees by using the regression weights [33].…”
Section: Model Conversions
confidence: 99%
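Written out, the log-linear likelihood this statement refers to takes the standard MLN form, with n_i(x) the number of true groundings of formula F_i in world x and w_i its weight; in LaTeX:

P(X = x) = \frac{1}{Z} \exp\!\Big( \sum_i w_i \, n_i(x) \Big),
\qquad
Z = \sum_{x'} \exp\!\Big( \sum_i w_i \, n_i(x') \Big)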
“…A state-of-the-art approach to learning the clauses in an MLN is to first learn a set of decision trees and then convert each branch of each decision tree to an MLN clause [36,33]. The weights of the clauses are obtained from probability estimation trees by using the log-conditional probabilities associated with a leaf [36], and from regression trees by using the regression weights [33]. The example rule from Figure 4 (right) would induce the MLN clause RA(S, P), TeachingA(P, 1), Intelligence(S, 2): w = ln(100%).…”
Section: Model Conversions
confidence: 99%
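As a sketch of the conversion these statements describe, the toy code below turns each root-to-leaf branch of a probability estimation tree into an MLN clause whose weight is the log of the conditional probability stored at the leaf. The tree shape, literal names, and probabilities are illustrative assumptions chosen to echo the quoted example, not values from the paper.

```python
import math

# A toy probability estimation tree as a nested dict: internal nodes test
# a literal, leaves hold P(class literal | branch). All literals and
# probabilities here are hypothetical, for illustration only.
tree = {
    "test": "RA(S, P)",
    "true": {
        "test": "TeachingA(P, 1)",
        "true": {"leaf": ("Intelligence(S, 2)", 1.00)},
        "false": {"leaf": ("Intelligence(S, 2)", 0.40)},
    },
    "false": {"leaf": ("Intelligence(S, 2)", 0.25)},
}

def branches_to_clauses(node, path=()):
    """Yield (body literals, head literal, weight) for each root-to-leaf
    branch, with weight = ln of the leaf's conditional probability."""
    if "leaf" in node:
        head, prob = node["leaf"]
        yield (list(path), head, math.log(prob))
        return
    lit = node["test"]
    yield from branches_to_clauses(node["true"], path + (lit,))
    yield from branches_to_clauses(node["false"], path + ("not " + lit,))

for body, head, w in branches_to_clauses(tree):
    print(f"{', '.join(body)}, {head} : w = {w:.3f}")
# The first branch reproduces the quoted example:
# RA(S, P), TeachingA(P, 1), Intelligence(S, 2) : w = ln(1.00) = 0.000
```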