Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (2016)
DOI: 10.18653/v1/d16-1146

Lifted Rule Injection for Relation Embeddings

Abstract: Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only few entities and rules. In this paper we present a highly efficient method for in…
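To make the idea in the abstract concrete, the following is a minimal sketch of lifted rule injection: entity-tuple embeddings are kept in an approximately Boolean space, and an implication rule between two relations is encouraged through a component-wise ordering penalty on their embeddings. The relation names, the dimensionality, and the hinge-style penalty are illustrative assumptions, not the paper's actual implementation.

import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical relation embeddings (free parameters during training).
relations = {
    "parent_of": rng.normal(size=dim),
    "ancestor_of": rng.normal(size=dim),
}

# Entity-tuple embedding squashed into (0, 1)^dim, i.e. approximately Boolean.
tuple_emb = 1.0 / (1.0 + np.exp(-rng.normal(size=dim)))

def score(tuple_vec, rel_vec):
    # Factorization-style score of a (tuple, relation) pair.
    return float(tuple_vec @ rel_vec)

def lifted_implication_penalty(body_vec, head_vec):
    # Penalize dimensions where the body relation exceeds the head relation.
    # With non-negative tuple embeddings, body <= head component-wise implies
    # score(t, body) <= score(t, head) for every tuple t, so the rule holds
    # for all tuples at once ("lifted"), without grounding over entities.
    return float(np.maximum(0.0, body_vec - head_vec).sum())

# Rule: parent_of(X, Y) => ancestor_of(X, Y)
penalty = lifted_implication_penalty(relations["parent_of"], relations["ancestor_of"])
print("rule violation penalty:", penalty)
print("scores:", score(tuple_emb, relations["parent_of"]),
      score(tuple_emb, relations["ancestor_of"]))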

Cited by 101 publications (105 citation statements, published 2017–2021). References 23 publications.

Selected citation statements:
“…(?, r, o) with right answer s, we first fill the subject entity position with each entity e ∈ E and thus get a set of triples T_subject-prediction = {(e, r, o) | e ∈ E}. Then we calculate the score for each triple in T_subject-prediction according to Equation (6) and rank the scores in descending order. The entity e in (e, r, o) with a higher rank is thus a more likely prediction.…”
Section: Embedding Evaluation (mentioning)
confidence: 99%
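The ranking procedure quoted above can be sketched as follows. The entity names, toy embeddings, and the translational scoring function are placeholders; the cited paper's Equation (6) would supply the real score.

import numpy as np

rng = np.random.default_rng(1)
dim = 4
entities = ["alice", "bob", "carol"]
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {"works_for": rng.normal(size=dim)}

def score(s, r, o):
    # Toy translational score standing in for the cited paper's Equation (6).
    return -float(np.linalg.norm(ent_emb[s] + rel_emb[r] - ent_emb[o]))

def rank_subjects(r, o, answer):
    # Fill the subject slot with every entity, score (e, r, o), sort by
    # descending score, and report the rank of the true subject.
    ranking = sorted(entities, key=lambda e: score(e, r, o), reverse=True)
    return ranking, ranking.index(answer) + 1

ranking, rank = rank_subjects("works_for", "carol", answer="alice")
print("ranking:", ranking, "| rank of true subject:", rank)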
“…[21] adds constraints for relation embeddings participating in path rules. [6] maps entity-tuple embeddings into an approximately Boolean space and encourages a partial ordering over relation embeddings based on implication rules. [8] adds approximate entailment constraints on relation representations.…”
Section: Embedding Learning (mentioning)
confidence: 99%
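As a rough illustration of how such constraints enter training, the sketch below adds a rule-violation penalty to a base embedding loss with a weight lambda. The base loss, the penalty form, and the weight are assumptions for illustration, not the formulation of any one of the cited methods.

import numpy as np

rng = np.random.default_rng(2)
dim = 8
rel = {"body": rng.normal(size=dim), "head": rng.normal(size=dim)}
rules = [("body", "head")]  # body => head, e.g. parent_of => ancestor_of
lam = 0.1  # weight of the rule regularizer

def base_loss(params):
    # Placeholder for the usual fact-reconstruction / ranking loss.
    return 0.5 * sum(float(v @ v) for v in params.values())

def rule_penalty(params, rules):
    # Hinge penalty encouraging a component-wise partial order body <= head.
    return sum(float(np.maximum(0.0, params[b] - params[h]).sum())
               for b, h in rules)

total = base_loss(rel) + lam * rule_penalty(rel, rules)
print("regularized objective:", total)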
“…An example of this is the Logic Tensor Networks approach in [46], where the authors show that encoding prior knowledge in symbolic form allows for better learning results from fewer training data, as well as more robustness against noise. A similar example is given in [47], where knowledge graphs are successfully used as priors in a scene-description task, and in [48], where logical rules are used as background knowledge for a gradient-descent learning task in a high-dimensional real-valued vector space.…”
Section: Learning With Symbolic Information As a Prior (mentioning)
confidence: 99%
“…The Skip-Gram model (Demeester et al., 2016) was used to train the continuous vectors by treating each set of 1 or 3 consecutive amino acids (1-gram or 3-gram) as a unit. The 3-gram model, also called ProtVec, was selected according to an earlier study (Asgari and Mofrad, 2015).…”
Section: Peptide Embedding Layer (mentioning)
confidence: 99%
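As an illustration of the n-gram treatment this excerpt describes, the sketch below splits a protein sequence into non-overlapping 3-grams from three shifted reading frames (the convention described for ProtVec); the sequence is made up, and the resulting token lists could then be fed to any Skip-Gram implementation.

def three_gram_sentences(sequence, n=3):
    # One token list per reading frame (offsets 0 .. n-1), each consisting of
    # non-overlapping n-grams of amino acids treated as "words".
    sentences = []
    for offset in range(n):
        frame = sequence[offset:]
        sentences.append([frame[i:i + n] for i in range(0, len(frame) - n + 1, n)])
    return sentences

for tokens in three_gram_sentences("MKTAYIAKQRQISFVKSHFSRQ"):
    print(tokens)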