2012
DOI: 10.1162/coli_a_00085

Learning Entailment Relations by Global Graph Structure Optimization

Abstract: Identifying entailment relations between predicates is an important part of applied semantic inference. In this article we propose a global inference algorithm that learns such entailment rules. First, we define a graph structure over predicates that represents entailment relations as directed edges. Then, we use a global transitivity constraint on the graph to learn the optimal set of edges, formulating the optimization problem as an Integer Linear Program. The algorithm is applied in a setting where, given a…
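The optimization sketched in the abstract, selecting entailment edges subject to a global transitivity constraint, can be written as a small Integer Linear Program. The snippet below is a minimal illustration of that formulation, not the authors' implementation: the toy predicates, the local scores w, and the use of the PuLP solver are assumptions introduced here.

```python
# Minimal sketch (not the authors' implementation) of edge selection under a
# global transitivity constraint, posed as an Integer Linear Program.
# The predicates and local scores below are hypothetical toy values.
from itertools import permutations
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

predicates = ["X buy Y", "X acquire Y", "X own Y"]
w = {
    ("X buy Y", "X acquire Y"): 0.9,   # strong local evidence for entailment
    ("X acquire Y", "X own Y"): 0.6,
    ("X buy Y", "X own Y"): -0.2,      # weak local evidence on its own
}

prob = LpProblem("entailment_graph", LpMaximize)

# One binary variable per directed edge u -> v, meaning "u entails v".
x = {
    (u, v): LpVariable(f"x_{i}_{j}", cat=LpBinary)
    for i, u in enumerate(predicates)
    for j, v in enumerate(predicates)
    if u != v
}

# Objective: total local score of the selected edges (unknown pairs penalized).
prob += lpSum(w.get(e, -1.0) * x[e] for e in x)

# Global transitivity: selecting u -> v and v -> k forces u -> k.
for u, v, k in permutations(predicates, 3):
    prob += x[(u, v)] + x[(v, k)] - x[(u, k)] <= 1

prob.solve()
print(sorted(e for e in x if x[e].value() == 1))
```

In this toy instance the two positively scored edges are worth selecting even though transitivity then forces the negatively scored edge "X buy Y" → "X own Y" into the solution, which is the kind of global interaction the constraint is meant to capture.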

Cited by 30 publications (31 citation statements)
References 35 publications

Citation statements (ordered by relevance):
“…Finally, while the performance of the predicate entailment component reflects the current state-of-the-art (Berant et al., 2012; Han and Sun, 2016), the performance on entity entailment is much worse than the current state-of-the-art in this task as measured on common lexical inference test sets. We conjecture that this stems from the nature of the entities in our dataset, consisting of both named entities and common nouns, many of which are multi-word expressions, whereas most work in entity entailment is focused on single-word common nouns.…”
Section: Results and Error Analysis
confidence: 94%
“…A threshold for the binary entailment decision was then calibrated on a held-out development set. Finally, for Predicate Entailment we used the entailment rules extracted by Berant et al. (2012).…”
Section: Baselines
confidence: 99%
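The calibration step mentioned in this snippet, choosing a score threshold on a held-out development set, can be sketched as follows; the scores, the gold labels, and the F1 selection criterion are placeholders assumed for illustration, not details from the cited work.

```python
# Sketch of the threshold-calibration step mentioned above: choose the score
# cut-off that maximizes F1 on a held-out development set. The scores and
# gold labels are placeholders, not data from the cited work.
import numpy as np
from sklearn.metrics import f1_score

dev_scores = np.array([0.10, 0.35, 0.40, 0.65, 0.80, 0.90])  # hypothetical rule scores
dev_labels = np.array([0, 1, 0, 1, 1, 1])                    # hypothetical gold decisions

best_t = max(np.unique(dev_scores),
             key=lambda t: f1_score(dev_labels, dev_scores >= t))
print(best_t)  # threshold reused for the binary entailment decision at test time
```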
“…This is typically done under a supervised framework (Berant, Dagan and Goldberger 2012; Weisman et al. 2012). First, while the focus of this article is to increase the scalability of Web-based rule acquisition, a following step would be to further improve the quality of the acquired rules.…”
Section: Conclusion and Discussion
confidence: 99%
“…To combine multi-context embeddings, we follow the general idea of Berant et al. (2012), who train an SVM to combine different similarity score features to learn textual entailment relations. Similarly, we train a Multilayer Perceptron (MLP) binary classifier that predicts whether a candidate term should be part of the expanded set based on 10 similarity scores (considered as input features), using the above 2 different scoring methods for each of the 5 context types.…”
Section: Multi-context Seed-candidate Similarity
confidence: 99%
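The classifier described in this snippet combines 10 similarity scores into a single membership decision. The sketch below is a rough mock-up using scikit-learn's MLPClassifier on synthetic feature values; it is not the cited authors' model or data.

```python
# Illustrative mock-up only: an MLP binary classifier over 10 similarity-score
# features (2 scoring methods x 5 context types), deciding whether a candidate
# term joins the expanded set. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))                    # 10 similarity scores per candidate
y = (X.mean(axis=1) > 0.5).astype(int)       # toy membership labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)

new_candidate = rng.random((1, 10))          # scores for an unseen candidate term
print(clf.predict_proba(new_candidate)[0, 1])  # estimated membership probability
```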