Proceedings of the 13th Linguistic Annotation Workshop 2019
DOI: 10.18653/v1/w19-4004
Annotating and analyzing the interactions between meaning relations

Abstract: Pairs of sentences, phrases, or other text pieces can hold semantic relations such as paraphrasing, textual entailment, contradiction, specificity, and semantic similarity. These relations are usually studied in isolation and no dataset exists where they can be compared empirically. Here we present a corpus annotated with these relations and the analysis of these results. The corpus contains 520 sentence pairs, annotated with these relations. We measure the annotation reliability of each individual relation an…

Cited by 8 publications (11 citation statements)
References 21 publications
“…posed by Gold et al (2019). The authors first semi-automatically generated a large pool of premise-hypothesis pairs.…”
Section: Crowd "Annotation" Strategy
mentioning
confidence: 99%
“…The "true-false" strategy combines one true and one false sentence derived from the same premise. Unlike Gold et al (2019), we do not include a random pairing and do not downsample "false-false" and "true-false" strategies. We randomly selected 2,000 of the pairs for annotation, ensuring equal distribution of strategies.…”
Section: Pair Generationmentioning
confidence: 99%
“…Similar observations have been made for Textual Entailment (Sammons et al, 2010; Cabrio and Magnini, 2014). Gold et al (2019) study the interactions between paraphrasing and entailment.…”
Section: Related Work
mentioning
confidence: 99%
“…Random sampling of examples has been successfully used in tasks such as Semantic Textual Similarity (STS) and Natural Language Inference (NLI) (Agirre et al, 2013; Gold et al, 2019) to obtain a reasonable lower bound. Comparison with the random baseline demonstrates that our system selects examples that can improve human performance.…”
mentioning
confidence: 99%