2017
DOI: 10.1007/978-3-319-63342-8_9
Learning Predictive Categories Using Lifted Relational Neural Networks

Abstract: Lifted relational neural networks (LRNNs) are a flexible neural-symbolic framework based on the idea of lifted modelling. In this paper we show how LRNNs can be easily used to declaratively specify and solve learning problems in which latent categories of entities, properties and relations need to be jointly induced.

Cited by 8 publications (10 citation statements) | References 8 publications
“…Here $e_h$ and $e_t$ are the embeddings of the entities $h$ and $t$, and $R_r$ is a diagonal matrix representing the relation $r$. The ComplEx model [43] is an extension of [48] in the complex space. A different strategy consists of learning latent soft clusters of predicates to predict missing facts in relational data, for example by using Markov logic networks [24] or by applying neural network models [36,40]. Several rule-based approaches have also been proposed, where observed regularities in the given knowledge graph are summarized as a weighted set of rules, which is then used to derive plausible missing facts.…”
Section: Related Work
confidence: 99%
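To make the scoring functions in this excerpt concrete, here is a minimal sketch (assuming [48] denotes the bilinear-diagonal DistMult-style model; all names, dimensions and values are illustrative):

```python
import numpy as np

def bilinear_diag_score(e_h, e_t, r_diag):
    """Score of a triple (h, r, t): e_h^T diag(r_diag) e_t."""
    return np.sum(e_h * r_diag * e_t)

def complex_score(e_h, e_t, r_diag):
    """ComplEx-style extension: the same bilinear form in complex
    space, keeping the real part of the score."""
    return np.real(np.sum(e_h * r_diag * np.conj(e_t)))

# Illustrative 4-dimensional embeddings
rng = np.random.default_rng(0)
e_h, e_t, r_diag = rng.normal(size=(3, 4))
print(bilinear_diag_score(e_h, e_t, r_diag))
```

A higher score is read as higher plausibility of the triple; restricting $R_r$ to a diagonal matrix keeps the parameter count linear in the embedding dimension.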
“…For example, [11] proposed a system inspired by inductive logic programming, while [44] introduced statistical schema induction to mine association rules from RDF data and then generate ontologies. More recently, [40] used so-called Lifted Relational Neural Networks to learn rules in an implicit way. In [31], meta-rules were found automatically by meta-interpretive learning.…”
Section: Related Work
confidence: 99%
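As a toy illustration of the rule-based strategy mentioned in both excerpts (a weighted rule mined from observed regularities is used to derive a plausible missing fact), consider the following sketch; the facts, the rule and its weight are invented purely for illustration:

```python
# Observed triples in a toy knowledge graph (illustrative data).
facts = {("anna", "spouse", "ben"), ("anna", "lives_in", "prague")}

# A mined weighted rule (hypothetical):
#   0.9 : lives_in(Y, Z) <- spouse(X, Y), lives_in(X, Z)
RULE_WEIGHT = 0.9

# Match the rule body against the facts to derive weighted candidates.
derived = {
    (y, "lives_in", z, RULE_WEIGHT)
    for (x1, p1, y) in facts if p1 == "spouse"
    for (x2, p2, z) in facts if p2 == "lives_in" and x2 == x1
}
print(derived)  # {('ben', 'lives_in', 'prague', 0.9)}
```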
“…The structure learning algorithm will create LRNNs having a generic "stacked" structure which we now describe. First, there are rules that define d new predicates, representing soft clusters [17] of unary predicates from the dataset. These can be thought of as the first layer of the LRNN, where the weighted facts from the dataset comprise the zeroth layer.…”
Section: Structure of the Learned LRNNs
confidence: 99%
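A minimal sketch of what such a first layer could look like, written as weighted rules (the predicate names, the choice d = 2, and the weights are all illustrative assumptions, not the paper's actual output):

```python
import random

# Unary predicates observed in the dataset (hypothetical names).
unary_predicates = ["carbon", "oxygen", "hydrogen"]
d = 2  # number of latent predicates / soft clusters (illustrative)

# One weighted rule per (latent predicate, unary predicate) pair, e.g.
#   w : cluster_0(X) <- carbon(X)
# so each latent predicate acts as a learnable soft cluster of the
# dataset's unary predicates; the weights are trainable parameters.
random.seed(0)
rules = [
    (random.uniform(-1.0, 1.0), f"cluster_{j}(X) <- {p}(X)")
    for j in range(d)
    for p in unary_predicates
]
for w, rule in rules:
    print(f"{w:+.2f} : {rule}")
```

Stacking further rule layers on top of these latent predicates then yields the generic "stacked" structure the excerpt describes.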
“…In Figure 5, we plot the evolution of these embeddings as new rules are being added by the structure learning algorithm. The left panel of Figure 5 displays the evolution of the embeddings of atom types after these have been pre-trained using an unsupervised method which was originally used for statistical predicate invention in [17]. The right panel of the same figure displays the evolution of the embeddings when starting from random initialization without any unsupervised pre-training.…”
Section: Molecular Datasets
confidence: 99%