Proceedings of the Eighth International Conference on Computational Semantics - IWCS-8 '09 2009
DOI: 10.3115/1693756.1693772
An extended model of natural logic

Abstract: We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward thro…
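The abstract's pipeline — one semantic relation per atomic edit, then a join across the edit sequence — can be sketched minimally. This is an illustrative assumption, not the paper's full machinery: the relation symbols and the partial join table below are stand-ins, and any pair not listed conservatively weakens to independence.

```python
# Hedged sketch of joining semantic relations across an edit sequence,
# in the style the abstract describes. The seven relation symbols and the
# partial JOIN table are illustrative assumptions; unlisted compositions
# default to independence ("#") as a conservative approximation.

EQ, FWD, REV, NEG, ALT, COV, IND = "=", "<", ">", "^", "|", "v", "#"

# Partial join table: composing relation a (premise -> intermediate)
# with relation b (intermediate -> hypothesis).
JOIN = {
    (FWD, FWD): FWD,  # forward entailment is transitive
    (REV, REV): REV,  # reverse entailment is transitive
    (NEG, NEG): EQ,   # if x = not-y and y = not-z, then x = z
}

def join(a: str, b: str) -> str:
    """Compose two semantic relations; equality acts as the identity."""
    if a == EQ:
        return b
    if b == EQ:
        return a
    return JOIN.get((a, b), IND)  # unknown compositions weaken to independence

def join_all(relations):
    """Fold join over the per-edit relations of a premise-to-hypothesis chain."""
    result = EQ
    for r in relations:
        result = join(result, r)
    return result
```

Defaulting unknown pairs to independence loses information but never licenses an unsound inference, which is the safe direction for an entailment system.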

Cited by 149 publications (130 citation statements)
References 16 publications
“…Prior work has used natural logic for RTE-style textual entailment, as a formalism well-suited for formal semantics in neural networks, and as a framework for common-sense reasoning (MacCartney and Manning, 2009; Watanabe et al., 2012; Bowman et al., 2014; Angeli and Manning, 2013). We adopt the precise semantics of Icard and Moss (2014).…”
Section: Related Work
confidence: 99%
“…In this task, also known as recognizing textual entailment (Cooper et al., 1996; Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), a model is presented with a pair of sentences, like one of those in Figure 1, and asked to judge the relationship between their meanings by picking a label from a small set: typically ENTAILMENT, NEUTRAL, and CONTRADICTION. Succeeding at NLI does not require a system to solve any difficult machine learning problems except, crucially, that of extracting effective and thorough representations for the meanings of sentences (i.e., their lexical and compositional semantics).…”
Section: Introduction
confidence: 99%
“…For example, "More than two perl scripts work" can entail "More than two scripts work", using a subgraph in the first argument, but "Fewer than two scripts work" can entail "Fewer than two perl scripts work", using a supergraph in the first argument. This consideration is similar to those observed in representations based on natural logic (MacCartney and Manning, 2009), which also uses low-level matching to perform some kinds of inference; but representations based on natural logic typically exclude other forms of inference, whereas the present model does not.…”
Section: Introduction
confidence: 73%
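The contrast in the statement above — modifier deletion preserves truth under "more than two" while modifier insertion preserves it under "fewer than two" — follows from the monotonicity of each quantifier's first argument. A minimal sketch, assuming standard monotonicity markings (the function and dictionary names here are made up for illustration):

```python
# Hedged sketch: classify a modifier deletion/insertion edit inside a
# quantifier's first argument by that argument's monotonicity.
# The quantifier markings are standard; names below are illustrative.

MONOTONICITY = {
    "more than two": "up",     # upward monotone in its restrictor
    "fewer than two": "down",  # downward monotone in its restrictor
}

def edit_relation(quantifier: str, edit: str) -> str:
    """Relation induced by deleting or inserting a restrictive modifier
    in the quantifier's first argument."""
    mono = MONOTONICITY[quantifier]
    # Deleting a modifier generalizes (moves to a superset of entities);
    # inserting one specializes (moves to a subset).
    generalizes = (edit == "delete")
    if (mono == "up") == generalizes:
        return "entailment"
    return "reverse entailment"

# "More than two perl scripts work" entails "More than two scripts work":
print(edit_relation("more than two", "delete"))   # entailment
# "Fewer than two scripts work" entails "Fewer than two perl scripts work":
print(edit_relation("fewer than two", "insert"))  # entailment
```

The same check run in the opposite direction (e.g. deleting a modifier under "fewer than two") returns reverse entailment, matching the sub/supergraph asymmetry the statement describes.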